Consume an "asp service" with Java (without WSDL) - java

I'm not sure about the exact terminology, but I have a "web ASP service" (from a .NET application) that gives me an XML file. (There is no WSDL...)
For example, I can query the web service with a URL like this:
http://Connectiquetest:86/Vaudoise/srv_db/db2_personnes.asp?Nopers=529720&SearchMode=1&Name=Smith
where Nopers=529720, SearchMode=1 and Name=Smith are used to return the response I want. (The URL is for an internal system; you cannot access it without a VPN.)
The content type returned is "text/xml" (it's a classic, well-formed XML document).
So, how can I use this information to make a call from a Java application and receive the XML? Can anybody tell me what this is exactly (maybe a specific implementation of SOAP)?
Thanks!

The simplest (and yet most labor-intensive) approach would be to open an HttpURLConnection. You can do that by calling URL.openConnection(). The URL object would obviously be based on the URL you mentioned, optionally appending the parameters Nopers, SearchMode and Name. You could then call getInputStream() to access the response of the call. Next would be parsing the XML to interpret the result.
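A minimal sketch of that low-level approach (the parameter values come straight from the URL in the question, and the response encoding is assumed to be UTF-8):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class AspServiceClient {

    public static void main(String[] args) throws Exception {
        // Build the same URL you would use in a browser; encode the parameter values.
        String query = "Nopers=" + URLEncoder.encode("529720", "UTF-8")
                + "&SearchMode=" + URLEncoder.encode("1", "UTF-8")
                + "&Name=" + URLEncoder.encode("Smith", "UTF-8");
        URL url = new URL("http://Connectiquetest:86/Vaudoise/srv_db/db2_personnes.asp?" + query);

        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        // Read the raw XML response; parsing it (DOM, SAX, JAXB, ...) is the next step.
        StringBuilder xml = new StringBuilder();
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), "UTF-8"));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                xml.append(line).append('\n');
            }
        } finally {
            reader.close();
            connection.disconnect();
        }
        System.out.println(xml);
    }
}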
However, as said, this requires a lot of low-level code (especially when reading the input, parsing the XML, and so on). Consider using something like Apache HttpComponents for HTTP-related work, combined with something like JAXB for easily converting the retrieved XML to plain old Java objects.
Using JAXB might require you to have an XML schema. In the ideal situation, it already exists (written by the people who wrote the ASP service). In the sub-ideal situation, you'll have to derive it yourself.
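If you end up writing the mapping classes yourself, a JAXB sketch could look like this (the root element name and fields are hypothetical, since the actual XML isn't shown in the question):

import java.io.StringReader;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical mapping class; the names must match the real XML elements.
@XmlRootElement(name = "personne")
class Personne {
    public String nopers;
    public String name;
}

public class PersonneParser {
    public static Personne parse(String xml) throws Exception {
        JAXBContext context = JAXBContext.newInstance(Personne.class);
        Unmarshaller unmarshaller = context.createUnmarshaller();
        return (Personne) unmarshaller.unmarshal(new StringReader(xml));
    }
}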

Related

Need a Java API that returns different responses to different clients [POST REST call] [duplicate]

After having read a lot of material on REST versioning, I am thinking of versioning the calls instead of the API. For example:
http://api.mydomain.com/callfoo/v2.0/param1/param2/param3
http://api.mydomain.com/verifyfoo/v1.0/param1/param2
instead of first having
http://api.mydomain.com/v1.0/callfoo/param1/param2
http://api.mydomain.com/v1.0/verifyfoo/param1/param2
then going to
http://api.mydomain.com/v2.0/callfoo/param1/param2/param3
http://api.mydomain.com/v2.0/verifyfoo/param1/param2
The advantage I see are:
When the calls change, I do not have to rewrite my entire client - only the parts that are affected by the changed calls.
Those parts of the client that work can continue as is (we have a lot of testing hours invested to ensure both the client and the server sides are stable.)
I can use permanent or non-permanent redirects for calls that have changed.
Backward compatibility would be a breeze as I can leave older call versions as is.
Am I missing something? Please advise.
Require an HTTP header.
Version: 1
The Version header is provisionally registered in RFC 4229, and there are some legitimate reasons to avoid using an X- prefix or a usage-specific URI. A more typical header was proposed by yfeldblum at https://stackoverflow.com/a/2028664:
X-API-Version: 1
In either case, if the header is missing or doesn't match what the server can deliver, send a 412 Precondition Failed response code along with the reason for the failure. This requires clients to specify the version they support every single time but enforces consistent responses between client and server. (Optionally supporting a ?version= query parameter would give clients an extra bit of flexibility.)
This approach is simple, easy to implement and standards-compliant.
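As a rough illustration of the header check (a JAX-RS 2.0 request filter is assumed here; the header name and supported version are just examples):

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

@Provider
public class ApiVersionFilter implements ContainerRequestFilter {

    private static final String SUPPORTED_VERSION = "1"; // example value

    @Override
    public void filter(ContainerRequestContext requestContext) {
        String version = requestContext.getHeaderString("Version");
        if (version == null || !version.equals(SUPPORTED_VERSION)) {
            // Missing or unsupported version: fail fast with 412 and explain why.
            requestContext.abortWith(Response.status(Response.Status.PRECONDITION_FAILED)
                    .entity("This server only supports Version: " + SUPPORTED_VERSION)
                    .build());
        }
    }
}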
Alternatives
I'm aware that some very smart, well-intentioned people have suggested URL versioning and content negotiation. Both have significant problems in certain cases and in the form that they're usually proposed.
URL Versioning
Endpoint/service URL versioning works if you control all servers and clients. Otherwise, you'll need to handle newer clients falling back to older servers, which you'll end up doing with custom HTTP headers because system administrators of server software deployed on heterogeneous servers outside of your control can do all sorts of things to screw up the URLs you think will be easy to parse if you use something like 302 Moved Temporarily.
Content Negotiation
Content negotiation via the Accept header works if you are deeply concerned about following the HTTP standard but also want to ignore what the HTTP/1.1 standard documents actually say. The proposed MIME Type you tend to see is something of the form application/vnd.example.v1+json. There are a few problems:
There are cases where the vendor extensions are actually appropriate, of course, but slightly different communication behaviors between client and server don't really fit the definition of a new 'media type'. Also, RFC 2616 (HTTP/1.1) reads, "Media-type values are registered with the Internet Assigned Number Authority. The media type registration process is outlined in RFC 1590. Use of non-registered media types is discouraged." I don't want to see a separate media type for every version of every software product that has a REST API.
Any subtype ranges (e.g., application/*) don't make sense. For REST APIs that return structured data to clients for processing and formatting, what good is accepting */* ?
The Accept header takes some effort to parse correctly. There's both an implied and explicit precedence that should be followed to minimize the back-and-forth required to actually do content negotiation correctly. If you're concerned about implementing this standard correctly, this is important to get right.
RFC 2616 (HTTP/1.1) describes the behavior for any client that does not include an Accept header: "If no Accept header field is present, then it is assumed that the client accepts all media types." So, for clients you don't write yourself (where you have the least control), the most correct thing to do would be to respond to requests using the newest, most prone-to-breaking-old-versions version that the server knows about. In other words, you could have not implemented versioning at all and those clients would still be breaking in exactly the same way.
Edited, 2014:
I've read a lot of the other answers and everyone's thoughtful comments; I hope I can improve on this with the benefit of a couple of years of feedback:
Don't use an 'X-' prefix. I think Accept-Version is probably more meaningful in 2014, and there are some valid concerns about the semantics of re-using Version raised in the comments. There's overlap with defined headers like Content-Version and the relative opaqueness of the URI for sure, and I try to be careful about confusing the two with variations on content negotiation, which the Version header effectively is. The third 'version' of the URL https://example.com/api/212315c2-668d-11e4-80c7-20c9d048772b is wholly different than the 'second', regardless of whether it contains data or a document.
Regarding what I said above about URL versioning (endpoints like https://example.com/v1/users, for instance) the converse probably holds more truth: if you control all servers and clients, URL/URI versioning is probably what you want. For a large-scale service that could publish a single service URL, I would go with a different endpoint for every version, like most do. My particular take is heavily influenced by the fact that the implementation as described above is most commonly deployed on lots of different servers by lots of different organizations, and, perhaps most importantly, on servers I don't control. I always want a canonical service URL, and if a site is still running the v3 version of the API, I definitely don't want a request to https://example.com/v4/ to come back with their web server's 404 Not Found page (or even worse, 200 OK that returns their homepage as 500k of HTML over cellular data back to an iPhone app.)
If you want very simple /client/ implementations (and wider adoption), it's very hard to argue that requiring a custom header in the HTTP request is as simple for client authors as GET-ting a vanilla URL. (Although authentication often requires your token or credentials to be passed in the headers, anyway. Using Version or Accept-Version as a secret handshake along with an actual secret handshake fits pretty well.)
Content negotiation using the Accept header is good for getting different MIME types for the same content (e.g., XML vs. JSON vs. Adobe PDF), but not defined for versions of those things (Dublin Core 1.1 vs. JSONP vs. PDF/A). If you want to support the Accept header because it's important to respect industry standards, then you won't want a made-up MIME Type interfering with the media type negotiation you might need to use in your requests. A bespoke API version header is guaranteed not to interfere with the heavily-used, oft-cited Accept, whereas conflating them into the same usage will just be confusing for both server and client. That said, namespacing what you expect into a named profile per 2013's RFC6906 is preferable to a separate header for lots of reasons. This is pretty clever, and I think people should seriously consider this approach.
Adding a header for every request is one particular downside to working within a stateless protocol.
Malicious proxy servers can do almost anything to destroy HTTP requests and responses. They shouldn't, and while I don't talk about the Cache-Control or Vary headers in this context, all service creators should carefully consider how their content is consumed in lots of different environments.
This is a matter of opinion; here's mine, along with the motivation behind the opinion.
Include the version in the URL
For those who say it belongs in the HTTP header, I say: maybe. But putting it in the URL is the accepted way to do it according to the early leaders in the field (Google, Yahoo, Twitter, and more). This is what developers expect, and doing what developers expect, in other words acting in accordance with the principle of least astonishment, is probably a good idea. It absolutely does not make it "harder for clients to upgrade". If the change in URL somehow represents an obstacle to the developer of a consuming application, as suggested in a different answer here, that developer needs to be fired.
Skip the minor version
There are plenty of integers. You're not gonna run out. You don't need the decimal in there. Any change from 1.0 to 1.1 of your API shouldn't break existing clients anyway. So just use the natural numbers. If you like to use separation to imply larger changes, you can start at v100 and do v200 and so on, but even there I think YAGNI and it's overkill.
Put the version leftmost in the URI
Presumably there are going to be multiple resources in your model. They all need to be versioned in synchrony. You can't have people using v1 of resource X, and v2 of resource Y. It's going to break something. If you try to support that, it will create a maintenance nightmare as you add versions, and there's no added value for the developer anyway. So, http://api.mydomain.com/v1/Resource/12345, where Resource is the type of resource, and 12345 gets replaced by the resource id.
You didn't ask, but...
Omit verbs from your URL path
REST is resource-oriented. You have things like "CallFoo" in your URL path, which looks suspiciously like a verb, and not like a noun. This is wrong. Use the Force, Luke. Use the verbs that are part of REST: GET, PUT, POST, DELETE and so on. If you want to get the verification on a resource, then do GET http://domain/v1/Foo/12345/verification. If you want to update it, do POST /v1/Foo/12345.
Put optional params as a query param or payload
The optional params should not be in the URL path (before the first question mark) unless you are suggesting that those optional params constitute a self-standing resource. So, POST /v1/Foo/12345?action=partialUpdate&param1=123&param2=abc.
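To make the noun-based layout concrete, here's a rough JAX-RS sketch (the Foo resource, its URL layout and the helper methods are hypothetical, built only to mirror the examples above):

import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.QueryParam;

// Version leftmost, nouns only, verbs expressed through HTTP methods.
@Path("/v1/Foo")
public class FooResource {

    // GET http://domain/v1/Foo/12345/verification
    @GET
    @Path("/{id}/verification")
    public String getVerification(@PathParam("id") String id) {
        return lookupVerification(id); // hypothetical helper
    }

    // POST /v1/Foo/12345?action=partialUpdate&param1=123&param2=abc
    @POST
    @Path("/{id}")
    public void update(@PathParam("id") String id,
                       @QueryParam("action") String action,
                       @QueryParam("param1") String param1,
                       @QueryParam("param2") String param2) {
        applyUpdate(id, action, param1, param2); // hypothetical helper
    }

    private String lookupVerification(String id) { return "verified"; }

    private void applyUpdate(String id, String action, String p1, String p2) { }
}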
Don't do either of those things, because they push the version into the URI structure, and that's going to have downsides for your client applications. It will make it harder for them to upgrade to take advantage of new features in your application.
Instead, you should version your media types, not your URIs. This will give you maximum flexibility and evolutionary ability. For more information, see this answer I gave to another question.
I like using the profile media type parameter:
application/json; profile="http://www.myapp.com/schema/entity/v1"
More Info:
https://www.rfc-editor.org/rfc/rfc6906
http://buzzword.org.uk/2009/draft-inkster-profile-parameter-00.html
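For illustration, here's one way a JAX-RS resource might attach that profile parameter to its responses (the profile URI is the one from the example above; the resource and the lookup method are hypothetical):

import java.util.Collections;
import java.util.Map;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/entity")
public class EntityResource {

    // Media type carrying the profile parameter used in the example above.
    private static final MediaType ENTITY_V1 = MediaType.valueOf(
            "application/json; profile=\"http://www.myapp.com/schema/entity/v1\"");

    @GET
    public Response getEntity() {
        Map<String, String> entity = loadEntity(); // hypothetical lookup
        return Response.ok(entity, ENTITY_V1).build();
    }

    private Map<String, String> loadEntity() {
        return Collections.singletonMap("name", "example"); // placeholder data
    }
}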
It depends on what you call versions in your API. If by 'versions' you mean different representations (XML, JSON, etc.) of the entities, then you should use the Accept header or a custom header. That is the way HTTP is designed to work with representations. It is RESTful because if I call the same resource at the same time but request different representations, the returned entities will have exactly the same information and property structure, just in a different format; this kind of versioning is cosmetic.
On the other hand, if you understand 'versions' as changes in entity structure, for example adding a field 'age' to the 'user' entity, then you should approach this from a resource perspective, which is in my opinion the RESTful approach. As described by Roy Fielding in his dissertation, "...a REST resource is a mapping from an identifier to a set of entities...". It therefore makes sense that when changing the structure of an entity you need a proper resource that points to that version. This kind of versioning is structural.
I made a similar comment in: http://codebetter.com/howarddierking/2012/11/09/versioning-restful-services/
When working with URL versioning, the version should come later, not earlier, in the URL:
GET/DELETE/PUT onlinemall.com/grocery-store/customer/v1/{id}
POST onlinemall.com/grocery-store/customer/v1
Another, cleaner way of doing it, although it could be problematic when implementing:
GET/DELETE/PUT onlinemall.com/grocery-store/customer.v1/{id}
POST onlinemall.com/grocery-store/customer.v1
Doing it this way allows the client to request specifically the resource they want, which maps to the entity they need, without having to mess with headers and custom media types, which is really problematic when implementing in a production environment.
Also, having the version late in the URL allows clients more granularity when choosing specifically the resources they want, even at the method level.
But the most important thing from a developer perspective is that you don't need to maintain whole mappings (paths) for every version of all the resources and methods, which is very valuable when you have a lot of sub-resources (embedded resources).
From an implementation perspective, having it at the resource level is really easy to implement, for example when using Jersey/JAX-RS:
@Path("/customer")
public class CustomerResource {
    ...
    @GET
    @Path("/v{version}/{id}")
    public IDto getCustomer(@PathParam("version") String version, @PathParam("id") String id) {
        return locateVersion(version, customerService.findCustomer(id));
    }
    ...
    @POST
    @Path("/v1")
    @Consumes(MediaType.APPLICATION_JSON)
    public IDto insertCustomerV1(CustomerV1Dto customer) {
        return customerService.createCustomer(customer);
    }

    @POST
    @Path("/v2")
    @Consumes(MediaType.APPLICATION_JSON)
    public IDto insertCustomerV2(CustomerV2Dto customer) {
        return customerService.createCustomer(customer);
    }
    ...
}
IDto is just an interface for returning a polymorphic object; CustomerV1Dto and CustomerV2Dto implement that interface.
Facebook does versioning in the URL. I feel URL versioning is cleaner and easier to maintain in the real world as well.
.Net makes it super easy to do versioning this way:
[HttpPost]
[Route("{version}/someCall/{id}")]
public HttpResponseMessage someCall(string version, int id)

Java's Jersey, RESTful API, and JSONP

This must have been answered previously, but my Google powers are off today and I have been struggling with this for a bit. We are migrating from an old PHP base to a Jersey-based JVM stack, which will ultimately provide a JSON-based RESTful API that can be consumed from many applications. Things have been really good so far and we love the easy POJO-to-JSON conversion. However, we are dealing with difficulties in cross-domain JSON requests. We essentially have all of our responses returning JSON (using @Produces("application/json") and the com.sun.jersey.api.json.POJOMappingFeature set to true) but for JSONP support we need to change our methods to return an instance of JSONWithPadding. This of course also requires us to add a @QueryParam("callback") parameter to each method, which will essentially duplicate our efforts, causing two methods to be needed to respond with the same data depending on whether or not there is a callback parameter in the request. Obviously, this is not what we want.
So we essentially have tried a couple different options. Being relatively new to Jersey, I am sure this problem has been solved. I read from a few places that I could write a request filter or I could extend the JSON Provider. My ideal solution is to have no impact on our data or logic layers and instead have some code that says "if there is a call back parameter, surround the JSON with the callback, otherwise just return the JSON". A solution was found here:
http://jersey.576304.n2.nabble.com/JsonP-without-using-JSONWithPadding-td7015082.html
However, that solution extends the Jackson JSON object, not the default JSON provider.
What are the best practices? If I am on the right track, what is the class for the default JSON provider that I can extend? Is there any additional configuration needed? Am I completely off track?
If all your resource methods return JSONWithPadding object, then Jersey automatically figures out if it should return JSON (i.e. just the object wrapped by it) or the callback as well based on the requested media type - i.e. if the media type requested by the client is any of application/javascript, application/x-javascript, text/ecmascript, application/ecmascript or text/jscript, then Jersey returns the object wrapped by the callback. If the requested media type is application/json, Jersey returns the JSON object (i.e. does not wrap it with the callback). So, one way to make this work is to make your resource method produce all the above media types (including application/json), always return JSONWithPadding and let Jersey figure out what to do.
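A rough sketch of that setup with Jersey 1.x (the resource, the Thing POJO and the default callback name are made up for illustration):

import com.sun.jersey.api.json.JSONWithPadding;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.xml.bind.annotation.XmlRootElement;

@Path("/things")
public class ThingResource {

    // Hypothetical POJO; with POJOMappingFeature enabled it is serialized to JSON.
    @XmlRootElement
    public static class Thing {
        public String name = "example";
    }

    @GET
    @Produces({"application/x-javascript", "application/javascript", "text/ecmascript",
               "application/ecmascript", "text/jscript", "application/json"})
    public JSONWithPadding getThing(@QueryParam("callback") @DefaultValue("callback") String callback) {
        // Jersey applies the callback only when a JavaScript media type is requested;
        // for application/json it unwraps the object and returns plain JSON.
        return new JSONWithPadding(new Thing(), callback);
    }
}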
If this does not work for you, let us know why it does not cover your use case (at users at jersey.java.net). Anyway, in that case you can use ContainerRequest/ResponseFilters. In the request filter you can modify the request headers any way you want (e.g. adjust the accept header) to ensure it matches the right resource method. Then in the response filter you can wrap the response entity using the JSONWithPadding depending on whether the callback query param is available and adjust the content type header.
So what I ultimately ended up doing (before Martin's great response came in) was creating a Filter and a ResponseWrapper that intercepted the output. The basis for the code is at http://docs.oracle.com/cd/B31017_01/web.1013/b28959/filters.htm
Essentially, the filter checks to see if the callback parameter exists. If it does, it prepends the callback to the outputted JSON and appends the ) at the end. This works great for us in our testing, although it has not been hardened yet. While I would have loved for Jersey to be able to handle it automatically, I could not get it to work with jQuery correctly (probably something on my side, not a problem with Jersey). We have pre-existing jQuery calls and we are changing the URLs to look at the new Jersey Server and we really didn't want to go into each $.ajax call to change any headers or content types in the calls if we didn't have to.
Aside from the small issue, Jersey has been great to work with!

How am I supposed to use Apache XML-RPC's TypeConverterFactory on the server side?

I am struggling to get Apache XML-RPC to call my TypeConvertorFactory at all. It is calling it on the client side (only return values confirmed to work so far, haven't tested parameters) but the server side isn't calling it at all.
My initialisation code looks like this:
XmlRpcServlet servlet = new XmlRpcServlet();
XmlRpcServerConfigImpl config = new XmlRpcServerConfigImpl();
config.setEnabledForExtensions(true);
servlet.getXmlRpcServletServer().setConfig(config);
servlet.getXmlRpcServletServer().setTypeConverterFactory(
        new ServerTypeConvertorFactory());
The API looks like this:
private interface API {
AvailableLicencesResponse availableLicences();
}
AvailableLicencesResponse is just an ordinary boring, albeit deeply nested JavaBean.
At the moment, the server sends this response as Base64 (this works because I called setEnabledForExtensions(true) - but if I don't call that, I get "unexpected end of stream" type errors instead, with no additional error anywhere to tell me why.)
I have set breakpoints inside ServerTypeConvertorFactory and it seems that none of the methods are being called on the server. The same breakpoints show the client calling convert() when the server hands back the result, but the result is already the correct type due to serialisation magic so I don't need to convert it.
Essentially, what I am trying to do is to implement a wire protocol which conforms to standard XML-RPC (and doesn't use huge amounts of Base64 data...) while still having an API which doesn't require a thousand casts and calls to Map.get(). I also want to avoid having to make more calls than necessary, which is why I'm returning a bean instead of having separate methods to get each property of it, which would have worked too.
(Also, if there is a simpler to use API than this one, I'm open for suggestions. It started out being really easy, but that's until you need to customise it. Also, anyone suggesting Axis or other SOAP-based APIs will be shot at dawn.)

Ideal web service protocol for single-operation Java program with many parameters?

I have been tasked with making an existing Java program available via a web service that will be invoked through a PHP form.
The program performs biological simulations and takes a large number of input parameters (around 30 different double, double[], and String fields). These parameters are usually set up as instance variables of an "Input" class which is passed into the program, which in turn produces an "Output" class (which itself has many important fields). For example, this is how they usually call the program:
Input in = new Input();
in.a = 3.14;
in.b = 42;
in.c = 2.71;
Output out = (new Program()).run(in);
I am looking for a way to turn this into a web service in the easiest way possible. Ideally, I would want it to work like this:
1) The PHP client form formats an array containing the input data and sends it to the service.
2) The Java web service automatically parses this data and uses it to populate an Input instance.
3) The web service invokes the program and produces an Output instance, which is formatted and sent back to the client.
4) The client retrieves the output, formatted as an associative array.
I have already tried a few different methods (such as a SOAP operation and wrapping the data in XML), but nothing seems as elegant as I would like. The problem is that the program's Input variable specifications are likely to change, and I would prefer the only maintenance for such changes to be on the PHP form end.
Unfortunately, I don't have enough experience with web services to make an educated decision on what my setup should be like. Does anyone have any advice for what the most effective solution would be?
IMHO a JSON RESTful service will be the best. Look here: http://db.tmtec.biz/blogs/index.php/json-restful-web-service-in-java
If your program is standalone Java, use Jetty as an embedded web server.
If the application is already running as a web application, skip this.
The second phase is creating the web service. I'd recommend a RESTful web service: it is easier to implement and maintain than SOAP. You can use XML or JSON as the data format; I'd probably recommend JSON.
Now it is up to you. If you already use some kind of web framework, check whether it supports REST. If you do not use any framework, you can implement some kind of REST API using a simple servlet that implements doPost(), parses the input (JSON) and calls the implementation.
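A minimal sketch of such a servlet, assuming the Jackson library for JSON binding and the Input/Output/Program classes from the question (with publicly accessible fields, as in the example above):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.fasterxml.jackson.databind.ObjectMapper;

public class SimulationServlet extends HttpServlet {

    // ObjectMapper is thread-safe once configured and can be shared by all requests.
    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Bind the JSON body sent by the PHP form onto the existing Input bean.
        Input in = mapper.readValue(req.getInputStream(), Input.class);

        // Run the existing simulation code unchanged.
        Output out = new Program().run(in);

        // Send the Output bean back as JSON; the PHP client can json_decode() it into an associative array.
        resp.setContentType("application/json;charset=UTF-8");
        mapper.writeValue(resp.getOutputStream(), out);
    }
}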
There are a lot of possibilities. You should search for data serialization formats and choose the most suitable for this purpose.
One possibility would be to use Protocol Buffers.

Servlet doPost() Method setup?

I am interested in creating a web app that uses JSP, Servlets and XML.
At the moment I have the following:
JSP - Form input.
Servlet - Retrieving Form data and sending that data to a java object.
Java object (1) - Converts data into an XML file, then instantiates Java object (2).
Java object (2) - Sends that file to a database.
On the returning side the database will send back another XML file that I will then process using XSLT to display back to the user.
Can I place that XSLT code in the original Servlet's doPost() method? So my doPost() method would:
Retrieve the user-entered data from the form on my JSP page.
Instantiate a Java object to convert that data to XML; in turn, that object will instantiate another object to send the XML file to a database.
Convert the resulting XML file sent back from the database and display it for the user.
Can one servlet doPost() method handle all of this? If not, how would I set up my application and classes to handle this work flow?
Thank you in advance
I wouldn't load the XSLT in doPost(), because then every request has to do it.
Read the XSLT in the init() method, precompile it, and cache it. Just make sure that you keep it thread safe.
Once you have the XSLT, you've got to apply it to every XML response, so those steps do belong in doPost().
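A minimal sketch of that approach (the stylesheet path and the database helper are made up; javax.xml.transform.Templates is thread-safe and can be shared, while Transformer instances are not and should be created per request):

import java.io.IOException;
import java.io.StringReader;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.xml.transform.Templates;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class ReportServlet extends HttpServlet {

    // Compiled once in init() and reused for every request.
    private Templates templates;

    @Override
    public void init() throws ServletException {
        try {
            TransformerFactory factory = TransformerFactory.newInstance();
            // Hypothetical stylesheet location inside the web app.
            templates = factory.newTemplates(new StreamSource(
                    getServletContext().getResourceAsStream("/WEB-INF/result.xsl")));
        } catch (TransformerException e) {
            throw new ServletException("Could not compile XSLT", e);
        }
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // ... form handling and the database call go here; assume they yield XML ...
        String xmlFromDatabase = fetchXmlForRequest(req); // hypothetical helper

        resp.setContentType("text/html;charset=UTF-8");
        try {
            // Transformer instances are cheap but not thread-safe: create one per request.
            Transformer transformer = templates.newTransformer();
            transformer.transform(
                    new StreamSource(new StringReader(xmlFromDatabase)),
                    new StreamResult(resp.getWriter()));
        } catch (TransformerException e) {
            throw new ServletException("XSL transformation failed", e);
        }
    }

    private String fetchXmlForRequest(HttpServletRequest req) {
        return "<result/>"; // placeholder for the real database round-trip
    }
}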
All your doPost() method has to do is generate a suitable servlet response (some form of content, and a suitable HTTP response structure). So it can do anything you want (including the above).
However it sounds like your rendering requirement is distinct from your form submission and storage requirement. So I would make your doPost() method delegate to a suitable method for rendering the output. That way you can generate output from stored data separately from submitting data to the database.
Well, this is not really specific to servlets, but more to Java/OOP (object-oriented programming) in general. You can in fact do everything in a single method, even in a main() method. But hundreds or more lines in a single method aren't readable, maintainable, reusable or testable in the long term. Right now, you're probably just starting with Java and you probably don't need to do anything else than this, but if you ever need to duplicate (almost) the same lines of code, then it's time to refactor. Extract the variables from the duplicated code lines and wrap those lines in a new method which takes those variables as arguments and does a simple one-step task.
In general, you'd like to split the big task into separate subtasks beforehand, using separate, reusable classes and methods. In your case, you could for example have a single DAO class for all the DB interaction tasks, a generic XML helper class to convert JavaBeans to XML and vice versa with the help of XSL, and (maybe) a domain object to manage the input/output processing (conversion/validation/error handling/response) and execute actions. Write down on paper how the big picture is to be accomplished in small single tasks. Each task can often be done well by a single method. Group methods with the same responsibilities and/or the same shared data in the same class.
To go a step further, for several tasks there may be 3rd-party tools available which ease the work. I can think of, for example, XMLBeans and/or XStream to do the JavaBean <-> XML conversion. That would already save a lot of boilerplate code and likely also the XSL step.
That said, duffymo's suggestion to load the XSL only once is a very good one. You don't need to re-execute, on every request, exactly the same task that doesn't depend on request parameters at all; that's just inefficient.
