How to find a class which can handle specific data in Java?

I have multiple classes in my current project, like HTMLRequest, SPDYRequest, BHIVERequest. I get data from a network stream and I want to find out which of these classes can handle that data. For this I read the header of the stream packet (all protocols share the same form, so I can read until I reach an empty line, \r\n) and then pass this header to a static function in each request class, which returns a boolean telling me whether it is a header for that kind of request or not. Now I want to be able to load specific protocols at runtime (from plug-ins). What is the best way to check whether I have a protocol for a given header?
My thoughts were:
An extra Protocol class as a singleton, which is registered in a RequestFactory; the factory then has to find out which Protocol can create a request for this kind of header and calls Protocol.assemble()
A static List of Class<? extends Request> so I can call the static methods through reflection or by Class.newInstance()
I don't like either of those ideas, so what is the right way to do this kind of thing dynamically in Java?

If it were me, I'd do something similar, but not identical, to option 1:
Read all plugins at startup, storing their headers and a reference to either their class name or an instance (if they're stateless) in a HashMap in a RequestFactory or similarly named utility. You'd also store references to the built-in system protocols in this map.
When a request comes in, make a call to RequestFactory.getProtocol(String header) to grab a reference to the protocol that should be used.
Note that if you go this route, you don't need a method in each protocol class that lets you query whether a header is appropriate for it; the factory handles that for you.
Edit
...I just noticed the "at runtime" part of your question, which makes the "at startup" part of my solution a little out of place. New plugins could still be registered into the factory's internal map as they're recognized/loaded; that shouldn't affect the usefulness of this design as a whole.
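Here's a rough, untested sketch of that map-based registry; the Protocol interface, its assemble() method and the header-key lookup are just illustrative stand-ins for your own types:
import java.io.InputStream;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical plug-in contract; the real project would use its own Request type here.
interface Protocol {
    Object assemble(String header, InputStream body);
}

final class RequestFactory {

    // Keyed by whatever token identifies the protocol in the header (e.g. "HTTP", "SPDY").
    private static final Map<String, Protocol> PROTOCOLS = new ConcurrentHashMap<>();

    private RequestFactory() {
    }

    // Called at startup for the built-in protocols and again whenever a plug-in is loaded.
    static void register(String headerKey, Protocol protocol) {
        PROTOCOLS.put(headerKey, protocol);
    }

    // Looks up the protocol responsible for the key parsed from the incoming header.
    static Protocol getProtocol(String headerKey) {
        Protocol protocol = PROTOCOLS.get(headerKey);
        if (protocol == null) {
            throw new IllegalArgumentException("No protocol registered for: " + headerKey);
        }
        return protocol;
    }
}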

Related

Restful way of sending Pre-Post or Pre-Put warning message

I am trying to convert a legacy application to RESTful web services. One of our old forms displayed a warning message immediately on form load. This warning message depends on a user property.
For example, there is a property isInactiveByDefault which, when set to "true", sets the status of an employee newly created via POST v1/employees to "Inactive". On loading the "Employee" form, the user will see the warning message "Any new employee created will have inactive status".
I originally thought of providing a resource to get the status of the property and letting the client decide whether to display the warning message or not based on the value of the property. But my manager wants to avoid any business logic on the client side. According to him, this is business logic and should be handled by the server.
I am wondering what the RESTful way of sending this warning message is. I could send the message with the POST response, but that would mean the client receives the message after the action.
I think that delegating to the client the choice of whether or not to display the message is at the very frontier of what one may call "business logic". I believe there is room for debate here.
But leaving that point aside: if we consider the message(s) to display as data, then as soon as your client can make REST calls (via JS, AJAX or whatever), you should be able to query the server before or while the form loads (and wait/sync on that).
So it is perfectly fine and "RESTful" to perform a GET request on the service to, for example, retrieve a list of warning messages (possibly internationalized) and display them. The server will prepare this list of warnings based on this or that property and return it (as JSON or whatever). Your client will only have to display the parsed result of the request (0-N messages to list) next to your form.
EDIT
Example REST service for your use case:
@POST
@Path("/v1/employees")
public Response someFormHandlingMethod(Employee emp) {
    // Your form processing
    // ...
    if (EmployeeParameters.isInactiveByDefault()) {
        emp.setInactive(true);
    }
    // ...
    return Response.ok(emp).build();
}

@GET
@Path("/v1/employees/formwarnings")
public Response getEmployeeFormWarnings() {
    // "Business logic" -> determine the warning messages to return to the client
    List<String> warnings = new ArrayList<>();
    // ...
    if (EmployeeParameters.isInactiveByDefault()) {
        warnings.add("Any new employee created will have inactive status");
    }
    // ...
    // and return them in the Response (entity as JSON or whatever); could also be List<String> for example
    return Response.ok(warnings).build();
}
It is just an example so that you get the idea, but the actual implementation will depend on your Jersey configuration, mappers/converters, annotations (such as @Produces), etc.
The focus of a REST architecture is on decoupling clients from servers by defining a set of constraints both parties should adhere to. If those constraints are followed stringently, servers can evolve freely in the future without risking breaking clients that also adhere to the REST architecture, while such clients become more robust toward change as a consequence. As such, a server should teach clients how certain things need to look (i.e. by providing forms and form controls) and what further actions (i.e. links) the client can perform from the current state it is in.
By asking for Pre-Post or Pre-Put operations I guess you already assume an HTTP interaction, which is quite common for so called "RESTful" systems. Though as Jim Webber pointed out, HTTP is just a transport protocol whose application domain is the transfer of a document from a source to a target and any business rules we conclude from that interaction are just a side effect of the actual document management. So we have to come up with ways to trigger certain business activities based on the processing of certain documents. In addition to that, HTTP does not know anything like pre-post or pre-put or the like.
Usually a rule of thumb in distributed computing is to never trust any client input and thus to validate the input on the server side again. The earliest validation solutions worked by having the server provide the client with a Web form that teaches the client the available properties of a resource, together with a set of input controls for interacting with that form (e.g. a send button that, when clicked, marshals the input data into a representation format sent with an HTTP operation the server supports). A client therefore did not need to know any internals of the server, as it was served all the information it needed to make up the next request. This is in essence what HATEOAS is all about. Once the server received a request on a certain endpoint, it started to process the received document, i.e. the input data, and if certain constraints it put on that resource were violated, it basically sent the same form back to the client, though this time with the client's input data included and further markup added to the form representation indicating the input errors. Through style attributes added to those errors, they usually stood out notably from the general page design (typically red text and/or a box next to the offending input control) and thus made the problems easier to spot.
With the introduction of JavaScript, some clever people came up with a way to send not only the form to the client but also a script that checks the user input in the background and, if a constraint is violated, dynamically adds the above-mentioned failure markup to the DOM of the current form/page, informing the user of possible problems with the current input. This, however, requires that the representation format, or more formally its media type, supports dynamic content such as scripts and manipulation of the current representation.
Unfortunately, many representation formats exchanged in typical "REST APIs/services" do not support such properties. A plain JSON document, for instance, just defines the basic syntax (objects are key-value containers between curly braces, and so on) but does not define semantics for any of the fields it might contain; it does not even know what a URI is. As such, JSON is not a good representation format for a REST architecture to start with. While HAL/JSON or JSON Hyper-Schema attempt to add support for links and rudimentary semantics, they still end up defining their own media types and thus representation formats. Other JSON-based media types, such as hal-forms, halo+json (halform) and ion, attempt to add form support. Some of these formats allow restrictions to be attached to certain form elements that, when violated, automatically result in a user warning and block the actual document transmission, though most current media types still lack support for client-side scripts or dynamic content updates.
Such circumstances led Fielding, in one of his famous blog posts, to a statement like this:
A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types
Usually the question should not be which media type to support but how many different ones your server/client should support, since the more media type representations a peer can handle, the better it will be able to interact with its counterpart. Ideally you want to reuse the same client for all interactions, while the same server should be able to serve a multitude of clients, especially ones not under your direct control.
In regard to a "RESTful" solution, I would not recommend using custom headers or proprietary message formats, as they are usually only understood, and thus processable, by a limited number of clients, which is the opposite of what the REST architecture actually aims to achieve. Such messages also usually require a priori knowledge of the server's behavior, which may lead to problems if the server ever needs to change.
So, long story short, the safest way to introduce "RESTful" input validation (i.e. validation that is in accordance with the constraints the REST architecture puts in place) to a resource is to use a form-based representation that teaches a client everything it needs to create an actual request; the server processes that request and, if certain input constraints are violated, returns the form to the client again with a hint on the respective input problem.
If you want to minimize the interaction between client and server for performance reasons (which are usually not the core problem at hand), you might need to define your own media type that adds script support and dynamic content updates, through means such as DOM manipulation or the like. Such a custom media type, however, should be registered with IANA to allow other implementers to add support for it and thus allow other clients to interact with your system in the future.
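To make that last part a bit more concrete, here is a very loose JAX-RS sketch of the "return the form with the violation marked" idea; the /employees path, the field names and the ad-hoc map-based form structure are assumptions for illustration, not a standard media type:
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Response;

@Path("/employees")
public class EmployeeFormResource {

    @POST
    @Consumes("application/json")
    @Produces("application/json")
    public Response create(Map<String, Object> submitted) {
        Object name = submitted.get("name");
        if (name == null || name.toString().trim().isEmpty()) {
            // Constraint violated: hand the submitted "form" back to the client,
            // annotated with the problem, instead of processing it.
            Map<String, Object> form = new LinkedHashMap<>(submitted);
            form.put("_errors", Collections.singletonMap("name", "Name must not be empty"));
            return Response.status(Response.Status.BAD_REQUEST).entity(form).build();
        }
        // Happy path: persist the employee and report success.
        return Response.status(Response.Status.CREATED).build();
    }
}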

Need a Java API that returns different responses to different clients [POST REST call] [duplicate]

After having read a lot of material on REST versioning, I am thinking of versioning the calls instead of the API. For example:
http://api.mydomain.com/callfoo/v2.0/param1/param2/param3
http://api.mydomain.com/verifyfoo/v1.0/param1/param2
instead of first having
http://api.mydomain.com/v1.0/callfoo/param1/param2
http://api.mydomain.com/v1.0/verifyfoo/param1/param2
then going to
http://api.mydomain.com/v2.0/callfoo/param1/param2/param3
http://api.mydomain.com/v2.0/verifyfoo/param1/param2
The advantages I see are:
When the calls change, I do not have to rewrite my entire client - only the parts that are affected by the changed calls.
Those parts of the client that work can continue as is (we have a lot of testing hours invested to ensure both the client and the server sides are stable.)
I can use permanent or non-permanent redirects for calls that have changed.
Backward compatibility would be a breeze as I can leave older call versions as is.
Am I missing something? Please advise.
Require an HTTP header.
Version: 1
The Version header is provisionally registered in RFC 4229, and there are some legitimate reasons to avoid using an X- prefix or a usage-specific URI. A more typical header was proposed by yfeldblum at https://stackoverflow.com/a/2028664:
X-API-Version: 1
In either case, if the header is missing or doesn't match what the server can deliver, send a 412 Precondition Failed response code along with the reason for the failure. This requires clients to specify the version they support every single time but enforces consistent responses between client and server. (Optionally supporting a ?version= query parameter would give clients an extra bit of flexibility.)
This approach is simple, easy to implement and standards-compliant.
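As a hedged sketch, the server side of this could look roughly like the following with JAX-RS 2.x; the SUPPORTED_VERSION constant and the optional ?version= fallback are illustrative assumptions:
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

// Rejects requests whose Version header is missing or unsupported with 412, as described above.
@Provider
public class VersionFilter implements ContainerRequestFilter {

    private static final String SUPPORTED_VERSION = "1";

    @Override
    public void filter(ContainerRequestContext ctx) {
        String version = ctx.getHeaderString("Version");
        if (version == null) {
            // Optional fallback to a ?version= query parameter.
            version = ctx.getUriInfo().getQueryParameters().getFirst("version");
        }
        if (!SUPPORTED_VERSION.equals(version)) {
            ctx.abortWith(Response.status(Response.Status.PRECONDITION_FAILED)
                    .entity("Unsupported or missing Version header; this server speaks version "
                            + SUPPORTED_VERSION)
                    .build());
        }
    }
}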
Alternatives
I'm aware that some very smart, well-intentioned people have suggested URL versioning and content negotiation. Both have significant problems in certain cases and in the form that they're usually proposed.
URL Versioning
Endpoint/service URL versioning works if you control all servers and clients. Otherwise, you'll need to handle newer clients falling back to older servers, which you'll end up doing with custom HTTP headers because system administrators of server software deployed on heterogeneous servers outside of your control can do all sorts of things to screw up the URLs you think will be easy to parse if you use something like 302 Moved Temporarily.
Content Negotiation
Content negotiation via the Accept header works if you are deeply concerned about following the HTTP standard but also want to ignore what the HTTP/1.1 standard documents actually say. The proposed MIME Type you tend to see is something of the form application/vnd.example.v1+json. There are a few problems:
There are cases where the vendor extensions are actually appropriate, of course, but slightly different communication behaviors between client and server don't really fit the definition of a new 'media type'. Also, RFC 2616 (HTTP/1.1) reads, "Media-type values are registered with the Internet Assigned Number Authority. The media type registration process is outlined in RFC 1590. Use of non-registered media types is discouraged." I don't want to see a separate media type for every version of every software product that has a REST API.
Any subtype ranges (e.g., application/*) don't make sense. For REST APIs that return structured data to clients for processing and formatting, what good is accepting */* ?
The Accept header takes some effort to parse correctly. There's both an implied and explicit precedence that should be followed to minimize the back-and-forth required to actually do content negotiation correctly. If you're concerned about implementing this standard correctly, this is important to get right.
RFC 2616 (HTTP/1.1) describes the behavior for any client that does not include an Accept header: "If no Accept header field is present, then it is assumed that the client accepts all media types." So, for clients you don't write yourself (where you have the least control), the most correct thing to do would be to respond to requests using the newest, most prone-to-breaking-old-versions version that the server knows about. In other words, you could have not implemented versioning at all and those clients would still be breaking in exactly the same way.
Edited, 2014:
I've read a lot of the other answers and everyone's thoughtful comments; I hope I can improve on this with the benefit of a couple of years of feedback:
Don't use an 'X-' prefix. I think Accept-Version is probably more meaningful in 2014, and there are some valid concerns about the semantics of re-using Version raised in the comments. There's overlap with defined headers like Content-Version and the relative opaqueness of the URI for sure, and I try to be careful about confusing the two with variations on content negotiation, which the Version header effectively is. The third 'version' of the URL https://example.com/api/212315c2-668d-11e4-80c7-20c9d048772b is wholly different than the 'second', regardless of whether it contains data or a document.
Regarding what I said above about URL versioning (endpoints like https://example.com/v1/users, for instance) the converse probably holds more truth: if you control all servers and clients, URL/URI versioning is probably what you want. For a large-scale service that could publish a single service URL, I would go with a different endpoint for every version, like most do. My particular take is heavily influenced by the fact that the implementation as described above is most commonly deployed on lots of different servers by lots of different organizations, and, perhaps most importantly, on servers I don't control. I always want a canonical service URL, and if a site is still running the v3 version of the API, I definitely don't want a request to https://example.com/v4/ to come back with their web server's 404 Not Found page (or even worse, 200 OK that returns their homepage as 500k of HTML over cellular data back to an iPhone app.)
If you want very simple /client/ implementations (and wider adoption), it's very hard to argue that requiring a custom header in the HTTP request is as simple for client authors as GET-ting a vanilla URL. (Although authentication often requires your token or credentials to be passed in the headers, anyway. Using Version or Accept-Version as a secret handshake along with an actual secret handshake fits pretty well.)
Content negotiation using the Accept header is good for getting different MIME types for the same content (e.g., XML vs. JSON vs. Adobe PDF), but not defined for versions of those things (Dublin Core 1.1 vs. JSONP vs. PDF/A). If you want to support the Accept header because it's important to respect industry standards, then you won't want a made-up MIME Type interfering with the media type negotiation you might need to use in your requests. A bespoke API version header is guaranteed not to interfere with the heavily-used, oft-cited Accept, whereas conflating them into the same usage will just be confusing for both server and client. That said, namespacing what you expect into a named profile per 2013's RFC6906 is preferable to a separate header for lots of reasons. This is pretty clever, and I think people should seriously consider this approach.
Adding a header for every request is one particular downside to working within a stateless protocol.
Malicious proxy servers can do almost anything to destroy HTTP requests and responses. They shouldn't, and while I don't talk about the Cache-Control or Vary headers in this context, all service creators should carefully consider how their content is consumed in lots of different environments.
This is a matter of opinion; here's mine, along with the motivation behind the opinion.
Include the version in the URL.
For those who say it belongs in the HTTP header, I say: maybe. But putting it in the URL is the accepted way to do it, according to the early leaders in the field (Google, Yahoo, Twitter, and more). This is what developers expect, and doing what developers expect, in other words acting in accordance with the principle of least astonishment, is probably a good idea. It absolutely does not make it "harder for clients to upgrade". If the change in URL somehow represents an obstacle to the developer of a consuming application, as suggested in a different answer here, that developer needs to be fired.
Skip the minor version
There are plenty of integers. You're not gonna run out. You don't need the decimal in there. Any change from 1.0 to 1.1 of your API shouldn't break existing clients anyway. So just use the natural numbers. If you like to use separation to imply larger changes, you can start at v100 and do v200 and so on, but even there I think YAGNI and it's overkill.
Put the version leftmost in the URI
Presumably there are going to be multiple resources in your model. They all need to be versioned in synchrony. You can't have people using v1 of resource X, and v2 of resource Y. It's going to break something. If you try to support that it will create a maintenance nightmare as you add versions, and there's no value add for the developer anyway. So, http://api.mydomain.com/v1/Resource/12345 , where Resource is the type of resource, and 12345 gets replaced by the resource id.
You didn't ask, but...
Omit verbs from your URL path
REST is resource-oriented. You have things like "CallFoo" in your URL path, which looks suspiciously like a verb rather than a noun. This is wrong. Use the Force, Luke. Use the verbs that are part of REST: GET, PUT, POST, DELETE and so on. If you want to get the verification on a resource, then do GET http://domain/v1/Foo/12345/verification. If you want to update it, do POST /v1/Foo/12345.
Put optional params as a query param or payload
The optional params should not be in the URL path (before the first question mark) unless you are suggesting that those optional params constitute a self-standing resource. So, POST /v1/Foo/12345?action=partialUpdate&param1=123&param2=abc.
Don't do either of those things, because they push the version into the URI structure, and that's going to have downsides for your client applications. It will make it harder for them to upgrade to take advantage of new features in your application.
Instead, you should version your media types, not your URIs. This will give you maximum flexibility and evolutionary ability. For more information, see this answer I gave to another question.
I like using the profile media type parameter:
application/json; profile="http://www.myapp.com/schema/entity/v1"
More Info:
https://www.rfc-editor.org/rfc/rfc6906
http://buzzword.org.uk/2009/draft-inkster-profile-parameter-00.html
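A small, non-authoritative sketch of how such a profile parameter could be emitted from a JAX-RS resource; the resource path and the inline JSON payload are placeholders:
import java.util.Collections;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/entity/{id}")
public class EntityResource {

    @GET
    public Response getEntity(@PathParam("id") String id) {
        String json = "{\"id\":\"" + id + "\"}";   // stand-in for the real entity payload
        // Produces: Content-Type: application/json;profile="http://www.myapp.com/schema/entity/v1"
        MediaType jsonV1 = new MediaType("application", "json",
                Collections.singletonMap("profile", "http://www.myapp.com/schema/entity/v1"));
        return Response.ok(json, jsonV1).build();
    }
}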
It depends on what you call versions in your API. If by versions you mean different representations (XML, JSON, etc.) of the entities, then you should use the Accept header or a custom header; that is the way HTTP is designed to work with representations. It is RESTful because if I call the same resource at the same time but request different representations, the returned entities will have exactly the same information and property structure, just in a different format. This kind of versioning is cosmetic.
On the other hand, if you understand 'versions' as changes in entity structure, for example adding a field 'age' to the 'user' entity, then you should approach this from a resource perspective, which is in my opinion the RESTful approach. As described by Roy Fielding in his dissertation, "...a REST resource is a mapping from an identifier to a set of entities...". It therefore makes sense that when changing the structure of an entity you need a proper resource that points to that version. This kind of versioning is structural.
I made a similar comment in: http://codebetter.com/howarddierking/2012/11/09/versioning-restful-services/
When working with URL versioning, the version should come later, not earlier, in the URL:
GET/DELETE/PUT onlinemall.com/grocery-store/customer/v1/{id}
POST onlinemall.com/grocery-store/customer/v1
Another, cleaner way of doing it, though it could be problematic to implement:
GET/DELETE/PUT onlinemall.com/grocery-store/customer.v1/{id}
POST onlinemall.com/grocery-store/customer.v1
Doing it this way allows clients to request specifically the resource they want, which maps to the entity they need, without having to mess with headers and custom media types, which is really problematic when implementing in a production environment.
Also, having the version late in the URL gives clients more granularity when choosing the specific resources they want, even at the method level.
But the most important thing from a developer's perspective is that you don't need to maintain the whole set of mappings (paths) for every version of all the resources and methods, which is very valuable when you have a lot of sub-resources (embedded resources).
From an implementation perspective, having it at the resource level is really easy to implement, for example with Jersey/JAX-RS:
#Path("/customer")
public class CustomerResource {
...
#GET
#Path("/v{version}/{id}")
public IDto getCustomer(#PathParam("version") String version, #PathParam("id") String id) {
return locateVersion(version, customerService.findCustomer(id));
}
...
#POST
#Path("/v1")
#Consumes(MediaType.APPLICATION_JSON)
public IDto insertCustomerV1(CustomerV1Dto customer) {
return customerService.createCustomer(customer);
}
#POST
#Path("/v2")
#Consumes(MediaType.APPLICATION_JSON)
public IDto insertCustomerV2(CustomerV2Dto customer) {
return customerService.createCustomer(customer);
}
...
}
IDto is just an interface for returning a polymorphic object; CustomerV1Dto and CustomerV2Dto implement that interface.
Facebook does versioning in the URL. I feel URL versioning is cleaner and easier to maintain in the real world as well.
.Net makes it super easy to do versioning this way:
[HttpPost]
[Route("{version}/someCall/{id}")]
public HttpResponseMessage someCall(string version, int id)

Allowing maximal flexibility/extensibility using a factory

I have a little design issue on which I would like to get some advice:
I have several classes that inherit from the same base class; each one can accept the same data and analyze it in a slightly different way.
Analyzer
|
˪_ AnalyzerA
|
˪_ AnalyzerB
...
I have an input file (I do not have control over the file's format) that defines which analyzers should be invoked and their parameters. It also defines data extractors in the same way, and other similar things too (by similar I mean that each is an action that can have several variations).
I have a module that iterates over the different analyzers in the file and calls some factory that constructs the correct analyzer. I have a factory for each of the archetypes the input file can define, and so far so good.
But what if I want to extend it and to add a new type of analyzer?
The solution I was thinking about is using a properties file for each factory, named after the factory, that holds a mapping between the input file's definition of whatever it wants me to execute and the actual classes I use to execute the action.
This way I could load that class at run-time, verify that it implements the right interface and then execute it.
If some John Doe would like to create his own analyzer, he'd just need to add a new property to the correct file (I'm not quite sure what the best strategy would be to allow this kind of property customization).
So in short:
Is my solution too flawed?
If not, what would be the most user-friendly/convenient way to allow customization of properties?
P.S
Unfortunately I'm confined to using only built-in JDK classes, like the existing solution, so I can't just drop SF in on them.
I hope this question is not out of line; I'm just not used to having my wings clipped this way, without SF or something similar to help me implement an elegant solution.
Have a look at the way the java.sql.DriverManager.getConnection(connectionString) method is implemented. The best way is to look at the source code.
A very rough summary of the idea (it is hidden inside a lot of private methods): it is more or less an implementation of chain of responsibility, although there is no linked list of drivers.
DriverManager manages a list of drivers.
Each driver must register itself with the DriverManager by calling its registerDriver() method.
Upon request for a connection, the getConnection(connectionString) method sequentially calls the drivers passing them the connectionString.
Each driver KNOWS if the given connection string is within its competence. If yes, it creates the connection and returns it. Otherwise the control is passed to the next driver.
Analogy:
drivers = your concrete Analyzers
connection strings = types of your files to be analyzed
Advantages:
There is no need to explicitly bind the analyzers to the type of file they are meant for. Let the analyzer decide itself whether it is able to analyze the file. If not, null is returned (or an exception or whatever) to tell the AnalyzerManager that the next analyzer in the row should be asked.
Adding a new analyzer just means adding a new call to the register() method. Complete decoupling.
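A minimal, JDK-only sketch of such a registry; the Analyzer interface and its canAnalyze()/analyze() methods are assumptions standing in for your real base class:
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical analyzer contract; the "input type" plays the role of the connection string.
interface Analyzer {
    boolean canAnalyze(String inputType);

    void analyze(java.io.File data);
}

final class AnalyzerManager {

    private static final List<Analyzer> ANALYZERS = new CopyOnWriteArrayList<>();

    private AnalyzerManager() {
    }

    // Each analyzer (including ones loaded from plug-ins) registers itself here.
    static void register(Analyzer analyzer) {
        ANALYZERS.add(analyzer);
    }

    // Ask each registered analyzer in turn, like DriverManager.getConnection() does with drivers.
    static Analyzer getAnalyzer(String inputType) {
        for (Analyzer analyzer : ANALYZERS) {
            if (analyzer.canAnalyze(inputType)) {
                return analyzer;
            }
        }
        throw new IllegalArgumentException("No analyzer registered for: " + inputType);
    }
}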

Java's Jersey, RESTful API, and JSONP

This must have been answered previously, but my Google powers are off today and I have been struggling with this for a bit. We are migrating from an old PHP base to a Jersey-based JVM stack, which will ultimately provide a JSON-based RESTful API that can be consumed from many applications. Things have been really good so far and we love the easy POJO-to-JSON conversion. However, we are dealing with difficulties in Cross-Domain JSON requests. We essentially have all of our responses returning JSON (using @Produces("application/json") and the com.sun.jersey.api.json.POJOMappingFeature set to true) but for JSONP support we need to change our methods to return an instance of JSONWithPadding. This of course also requires us to add a @QueryParam("callback") parameter to each method, which will essentially duplicate our efforts, causing two methods to be needed to respond with the same data depending on whether or not there is a callback parameter in the request. Obviously, this is not what we want.
So we essentially have tried a couple of different options. Being relatively new to Jersey, I am sure this problem has been solved. I read in a few places that I could write a request filter or extend the JSON provider. My ideal solution is to have no impact on our data or logic layers and instead have some code that says "if there is a callback parameter, surround the JSON with the callback, otherwise just return the JSON". A solution was found here:
http://jersey.576304.n2.nabble.com/JsonP-without-using-JSONWithPadding-td7015082.html
However, that solution extends the Jackson JSON object, not the default JSON provider.
What are the best practices? If I am on the right track, what is the class for the default JSON filter that I can extend? Is there any additional configuration needed? Am I completely off track?
If all your resource methods return JSONWithPadding object, then Jersey automatically figures out if it should return JSON (i.e. just the object wrapped by it) or the callback as well based on the requested media type - i.e. if the media type requested by the client is any of application/javascript, application/x-javascript, text/ecmascript, application/ecmascript or text/jscript, then Jersey returns the object wrapped by the callback. If the requested media type is application/json, Jersey returns the JSON object (i.e. does not wrap it with the callback). So, one way to make this work is to make your resource method produce all the above media types (including application/json), always return JSONWithPadding and let Jersey figure out what to do.
If this does not work for you, let us know why it does not cover your use case (at users at jersey.java.net). Anyway, in that case you can use ContainerRequest/ResponseFilters. In the request filter you can modify the request headers any way you want (e.g. adjust the accept header) to ensure it matches the right resource method. Then in the response filter you can wrap the response entity using the JSONWithPadding depending on whether the callback query param is available and adjust the content type header.
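For reference, a sketch of the first suggestion, assuming Jersey 1.x; the /items path and the List&lt;String&gt; payload are placeholders:
import java.util.Arrays;
import java.util.List;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.GenericEntity;
import javax.ws.rs.core.MediaType;
import com.sun.jersey.api.json.JSONWithPadding;

@Path("/items")
public class ItemsResource {

    // Produce both the JavaScript flavours (JSONP) and plain JSON; Jersey picks
    // whether to wrap the entity in the callback based on the requested media type.
    @GET
    @Produces({"application/x-javascript", "application/javascript", "text/ecmascript",
            "application/ecmascript", "text/jscript", MediaType.APPLICATION_JSON})
    public JSONWithPadding getItems(@QueryParam("callback") @DefaultValue("callback") String callback) {
        List<String> items = Arrays.asList("a", "b", "c");   // stand-in for real data
        return new JSONWithPadding(new GenericEntity<List<String>>(items) {}, callback);
    }
}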
So what I ultimately ended up doing (before Martin's great response came in) was creating a Filter and a ResponseWrapper that intercepted the output. The basis for the code is at http://docs.oracle.com/cd/B31017_01/web.1013/b28959/filters.htm
Essentially, the filter checks to see if the callback parameter exists. If it does, it prepends the callback name and an opening parenthesis to the output JSON and appends the closing ) at the end. This works great for us in our testing, although it has not been hardened yet. While I would have loved for Jersey to be able to handle it automatically, I could not get it to work with jQuery correctly (probably something on my side, not a problem with Jersey). We have pre-existing jQuery calls and we are changing the URLs to point at the new Jersey server, and we really didn't want to go into each $.ajax call to change any headers or content types if we didn't have to.
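For reference, a compressed, untested sketch of that filter/wrapper combination; the class and parameter names are illustrative, and a production version would also have to handle getOutputStream(), content length and so on:
import java.io.CharArrayWriter;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

public class JsonpFilter implements Filter {

    @Override
    public void init(FilterConfig config) {
    }

    @Override
    public void destroy() {
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String callback = req.getParameter("callback");
        if (callback == null) {
            chain.doFilter(req, res);                 // no JSONP requested: pass through untouched
            return;
        }
        HttpServletResponse httpRes = (HttpServletResponse) res;
        CharResponseWrapper wrapper = new CharResponseWrapper(httpRes);
        chain.doFilter(req, wrapper);                 // let Jersey write the plain JSON into the buffer
        httpRes.setContentType("application/javascript");
        httpRes.getWriter().write(callback + "(" + wrapper.toString() + ")");
    }

    // Captures the response body in memory instead of sending it straight to the client.
    private static class CharResponseWrapper extends HttpServletResponseWrapper {

        private final CharArrayWriter buffer = new CharArrayWriter();

        CharResponseWrapper(HttpServletResponse response) {
            super(response);
        }

        @Override
        public PrintWriter getWriter() {
            return new PrintWriter(buffer);
        }

        @Override
        public String toString() {
            return buffer.toString();
        }
    }
}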
Aside from the small issue, Jersey has been great to work with!

Preferred method for REST-style URLs?

I am creating a web application that incorporates REST-style services and I wanted some clarification as to the preferred (standard) method of how the POST requests should be accepted by my Java server side:
Method 1:
http://localhost:8080/services/processuser/uid/{uidvalue}/eid/{eidvalue}
Method 2:
http://localhost:8080/services/processuser
{uid:"",eid:""} - this would be sent as JSON in the post body
Both methods would use the "application/json" content type, but are there advantages and disadvantages to each method? One disadvantage of Method 2 that I can immediately think of is that the JSON data would need to be mapped to a Java object, thus creating a Java object any time any user accesses the "processuser" servlet API. Your input is much appreciated.
In this particular instance, the data would be used to query the database and return a JSON response back to the client.
I think we need to go back a little from your question. Your path segment starts with:
/services/processuser
This is a mistake. The URI should identify a resource, not an operation. This may not always be possible, but it's something you should strive for.
In this case, you seem to identify your user with a uid and an eid (whatever those are). You could build paths such that a user is referred to by /user/<uid>/<eid> or /user/<uid>-<eid> (or, if you must, /user/uid/<uid>/eid/<eid>); if eid is a specialization, and not on an equal footing with uid, then /user/<uid>;eid=<eid> would be more appropriate.
You would create new users by posting to /user/ or /user/<uid>/<eid> if you knew the identifiers in advance, deleting users by using DELETE on /user/<uid>/<eid> and change state by using PUT on /user/<uid>/<eid>.
So to answer your question, you should use PUT on /user/<uid>/<eid> if "processuser" aims to change the state of the user with data you provide. Otherwise, the mapping to the REST model is not so clean; possibly the best option would be to define a resource /user/process/<uid>/<eid> and POST there with all the data, but a POST to /user/process with all the data would be more or less the same, since we're already in RPC-like territory.
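A hedged JAX-RS sketch of what that resource-oriented layout could look like; the in-memory map is just a stand-in for whatever persistence you actually use:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/user")
public class UserResource {

    // In-memory stand-in for whatever persistence the real service uses.
    private static final Map<String, Map<String, Object>> USERS = new ConcurrentHashMap<>();

    // GET /user/{uid}/{eid} retrieves the user rather than "processing" it.
    @GET
    @Path("/{uid}/{eid}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response getUser(@PathParam("uid") String uid, @PathParam("eid") String eid) {
        Map<String, Object> user = USERS.get(uid + "/" + eid);
        if (user == null) {
            return Response.status(Response.Status.NOT_FOUND).build();
        }
        return Response.ok(user).build();
    }

    // PUT /user/{uid}/{eid} replaces the state of the identified user.
    @PUT
    @Path("/{uid}/{eid}")
    @Consumes(MediaType.APPLICATION_JSON)
    public Response updateUser(@PathParam("uid") String uid,
                               @PathParam("eid") String eid,
                               Map<String, Object> user) {
        USERS.put(uid + "/" + eid, user);
        return Response.noContent().build();
    }
}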
For POST requests, Method 2 is usually preferred, although often the resource name will be pluralized, so that you actually post to:
http://localhost:8080/services/processusers
This is for creating new records, however.
It looks like you're really using what most RESTful services would use a GET request for (retrieving a record), in which case, Method 1 is preferred.
Edit:
I realize I didn't source my answer, so consider the standards set by Rails. You may or may not agree that it is a valid standard.
