I have two servlets A & B.
On B I intend to have a method isAvailable() which A will call to check the status. If this method returns true, then A will pass an object to B.
After doing a bit of reading I'm seeing a couple of options, none of which I'm that familiar with: JNDI with remote EJB, RMI, or plain HTTP (not sure how you'd do the last one).
What do you guys think? Any other options?
Why not make use of the fact that your infrastructure is already talking HTTP?
Servlet A can perform an HTTP GET on a particular path to check the status (either getting an object back or checking an HTTP response code, though the latter seems a misuse of status codes), and PUT/POST an object if required. I note that you're running across multiple hosts, and this will work in your scenario.
The objects can be serialised using standard Java serialisation, or via a representation such as XML, perhaps using XStream.
That would seem to me a pretty straightforward way to leverage the infrastructure you already have.
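For the XML option, a minimal sketch of the XStream round trip; StatusReport is a hypothetical payload class, any plain POJO of your own works:

    import com.thoughtworks.xstream.XStream;

    // hypothetical payload class A would hand over to B
    class StatusReport {
        String source;
        boolean available;

        StatusReport(String source, boolean available) {
            this.source = source;
            this.available = available;
        }
    }

    public class XmlRoundTrip {
        public static void main(String[] args) {
            XStream xstream = new XStream();

            // serialise the object before POSTing it to B
            String xml = xstream.toXML(new StatusReport("A", true));

            // on B's side, turn the request body back into an object
            StatusReport report = (StatusReport) xstream.fromXML(xml);
            System.out.println(report.source + " available=" + report.available);
        }
    }

Note that recent XStream versions require you to explicitly allow the types you deserialise.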
Are your servlets running in the same application server? If so, you might like to use Spring to inject B into A so that the method can be called directly.
Even if the servlets are running in different containers, you can expose them (using Spring again) as remote objects and similarly inject B into A (except that in this case the Spring container will inject a proxy for the remote object). This has zero footprint in your code, i.e. it's all defined in config files and Spring takes care of everything for you.
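For the cross-container case, one classic way to wire this up is Spring's HTTP invoker remoting. A rough sketch with a hypothetical AvailabilityService interface and URL; the two beans would of course live in B's and A's application contexts respectively, and the exporter also needs to be reachable through a DispatcherServlet mapping:

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean;
    import org.springframework.remoting.httpinvoker.HttpInvokerServiceExporter;

    // hypothetical interface that B implements and A wants to call
    interface AvailabilityService {
        boolean isAvailable();
    }

    @Configuration
    public class RemotingConfig {

        // On B's side: export the real service bean over HTTP under /remoting/availabilityService.
        @Bean(name = "/remoting/availabilityService")
        public HttpInvokerServiceExporter availabilityExporter(AvailabilityService service) {
            HttpInvokerServiceExporter exporter = new HttpInvokerServiceExporter();
            exporter.setService(service);
            exporter.setServiceInterface(AvailabilityService.class);
            return exporter;
        }

        // On A's side: inject this proxy wherever you would have injected B directly.
        @Bean
        public HttpInvokerProxyFactoryBean availabilityService() {
            HttpInvokerProxyFactoryBean proxy = new HttpInvokerProxyFactoryBean();
            proxy.setServiceUrl("http://b-host:8080/app/remoting/availabilityService"); // hypothetical URL
            proxy.setServiceInterface(AvailabilityService.class);
            return proxy;
        }
    }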
It looks like this isAvailable() method in Servlet B accesses some kind of "global" data which is stored in the servlet. Could you extract this object into a separate singleton which is then available to both servlets?
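This only works if both servlets run in the same JVM. A minimal sketch of such a shared holder (the class and flag are illustrative):

    // Shared status holder, assuming both servlets run in the same JVM.
    public final class AvailabilityStatus {

        private static final AvailabilityStatus INSTANCE = new AvailabilityStatus();

        // volatile so that updates made by one servlet are visible to the other
        private volatile boolean available;

        private AvailabilityStatus() {
        }

        public static AvailabilityStatus getInstance() {
            return INSTANCE;
        }

        public boolean isAvailable() {
            return available;
        }

        public void setAvailable(boolean available) {
            this.available = available;
        }
    }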
There is one instance of Servlet A on a master host and many instances of Servlet B, each on its own host with its own Tomcat instance.
You can use java.net.URLConnection to fire an HTTP request programmatically.
Let A fire an HTTP request to B and have in B a servlet which listens for those requests and returns a response accordingly. This can be as simple as response.getWriter().write("ok"); or so. You can even return an XML string, and so on. In A you can then read this value from the InputStream of the URLConnection.
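A rough sketch of A's side; the URL is hypothetical and should point at whatever mapping B's status servlet uses, with B's doGet() simply doing response.getWriter().write("ok"):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLConnection;

    public class AvailabilityClient {

        public boolean isAvailable() {
            try {
                URLConnection connection = new URL("http://b-host:8080/app/status").openConnection();
                try (BufferedReader reader = new BufferedReader(
                        new InputStreamReader(connection.getInputStream(), "UTF-8"))) {
                    // B's servlet writes "ok" when it is available
                    return "ok".equals(reader.readLine());
                }
            } catch (IOException e) {
                // treat any connection problem as "not available"
                return false;
            }
        }
    }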
I want the response from one web service call to be used later by another web service call. How do I implement the code flow for this? Do I have to maintain a session?
I am creating RESTful web services using Spring and Java.
If user 1 calls an endpoint /getUserRecord and one minute later calls /checkUserRecord, which uses data from the first call, how do I handle this, given that user 2 can call /getUserRecord before user 1 calls /checkUserRecord?
Thanks,
Arpit
Technically you can pass the UserRecord from the get call to the check call:
GET /userrecords/ --> returns one or more records
POST /checkUserRecord with the record you want to check as the request body.
But I strongly advise you not to do this. Data provided by the client is unreliable and cannot be trusted by your backend code. What if some JavaScript has altered the original data? Besides, if you have a list of data or a heterogeneous payload to pass back and forth, it would end up as a complete mess of payload exchanges between client and server.
So, as Veselin Davidov said, you should probably stick with a clean, stateless REST paradigm and rely on an identifier:
GET /userrecords/ --> [ { "id": 1, "data": "myrecorddata" }, ...]
GET /checkUserRecord/{id}, e.g. /checkUserRecord/1
And yes, you will have to make two calls to the database. If your concern is performance, you can set up some caching mechanism, as piy26 points out, but caching can lead you to other issues (how do you define a proper and reliable caching strategy?).
Unless you manage a tremendous amount of data, I think you should first focus on providing a clear, maintainable and safely usable REST API with a stateless design.
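A minimal sketch of the stateless approach with Spring MVC; UserRecord, UserRecordRepository and the isValid() check are hypothetical and only here to make the sketch self-contained:

    import java.util.List;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class UserRecordController {

        private final UserRecordRepository repository;

        public UserRecordController(UserRecordRepository repository) {
            this.repository = repository;
        }

        @GetMapping("/userrecords")
        public List<UserRecord> getUserRecords() {
            return repository.findAll();
        }

        @GetMapping("/checkUserRecord/{id}")
        public boolean checkUserRecord(@PathVariable long id) {
            // re-load the record by id rather than trusting data sent back by the client
            UserRecord record = repository.findById(id);
            return record != null && record.isValid(); // isValid() stands in for the real check
        }
    }

    // hypothetical collaborators, defined only so the sketch compiles
    interface UserRecordRepository {
        List<UserRecord> findAll();
        UserRecord findById(long id);
    }

    class UserRecord {
        long id;
        String data;
        boolean isValid() { return data != null; }
    }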
If you are using Spring Boot, it provides a way to enable caching on your repository objects, which is what you are trying to solve:
@EnableCaching
org.springframework.cache.annotation.EnableCaching
You can use the user ID as part of the cache key so that the response for each user remains unique.
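A rough sketch of how that might look; the cache name, service class and lookup method are illustrative:

    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.cache.annotation.EnableCaching;
    import org.springframework.stereotype.Service;

    @SpringBootApplication
    @EnableCaching
    public class Application {
    }

    @Service
    class UserRecordService {

        // the user id becomes the cache key, so each user's record is cached separately
        @Cacheable(value = "userRecords", key = "#userId")
        public String getUserRecord(long userId) {
            return loadFromDatabase(userId); // stands in for the expensive lookup
        }

        private String loadFromDatabase(long userId) {
            // ... real database access elided
            return "record-for-" + userId;
        }
    }

The second call with the same user id is then served from the cache instead of hitting the database again.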
I have an interesting problem.
We have a number of EJBs that are called both by local code (via their local interfaces) and by client code (via their remote interfaces).
The code runs on WebLogic 12c servers and uses RMI for method invocations.
The system has been in development for many years and, among other things, implements browsing functionality around user-defined cursors (a kind of handle for a result set). There are already many calls to obtain such a cursor for various data types.
When the cursor is obtained it is used subsequently to request the underlying data (another call).
In our case we want to know whether the call is done from local code or from a remote client. We want to know this so we can preload the first n items, and thus reduce the number of calls to our server. Each call has an overhead of about 20ms, which we want to avoid.
The remote client code is generic (the cursor is wrapped in a kind of list) and could easily be adjusted to handle the preloaded data.
The local callers also call these EJB methods to obtain a cursor, but usually use other functionality to handle the cursor (wrapping in iterators, joins, etc). So they would become a lot more complex if they had to handle the preloading (and they often do not need it).
So we want to make an interceptor to do preloading of data in the cursor, but only if the call is made from a remote client. So far we could not find a way of doing so.
I tried RemoteServer.getClientHost() but it always throws an exception saying there is no connection.
I searched for whether the SessionContext could be extended with a field/value to be set by the caller to identify the remote client, but could not find anything about how to do this. (We have a homemade wrapper for the service interface which could be extended to insert such information into a context.)
So the question is:
Is there a generic way to find out, in an EJB interceptor, that the origin of the call was a different system?
If the remote client uses any kind of authentication, there should be some info in the security context about the principal which can be used to differentiate. Otherwise, short of a better solution, new Throwable().getStackTrace() returns an array of all callers; there must be a method upstream that can tell you whether the call is local or was made via a remote call.
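A rough sketch of such an interceptor. The marker package is an assumption: inspect an actual remote stack trace on your WebLogic installation first and match on whatever the container's remote invocation layer really is, and the "preload" context entry is just a hypothetical hook for the bean to react to:

    import javax.interceptor.AroundInvoke;
    import javax.interceptor.InvocationContext;

    public class RemoteCallInterceptor {

        @AroundInvoke
        public Object preloadIfRemote(InvocationContext ctx) throws Exception {
            if (isCalledRemotely()) {
                // mark the invocation so the bean preloads the first n cursor items
                ctx.getContextData().put("preload", Boolean.TRUE);
            }
            return ctx.proceed();
        }

        private boolean isCalledRemotely() {
            for (StackTraceElement frame : new Throwable().getStackTrace()) {
                // assumption: remote invocations pass through the container's RMI layer
                if (frame.getClassName().startsWith("weblogic.rmi")) {
                    return true;
                }
            }
            return false;
        }
    }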
I'm just getting into Spring (and Java), and despite quite a bit of research, I can't seem to even express the terminology for what I'm trying to do. I'll just explain the task, and hopefully someone can point me to the right Spring terms.
I'm writing a Spring-WS application that will act as middleware between two APIs. It receives a SOAP request, does some business logic, calls out to an external XML API, and returns a SOAP response. The external API is weird, though. I have to perform "service discovery" (make some API calls to determine the valid endpoints -- a parameter in the XML request) under a variety of situations (more than X hours since last request, more than Y requests since last discovery, etc.).
My thought was that I could have a class/bean/whatever (not sure of the best terminology) that could handle all this service discovery stuff in the background. Then the request handlers can query this "thing" to get a valid endpoint without needing to perform their own discovery and slow down request processing. (Service discovery only needs to be re-performed rarely, so it would be wasteful to do it for every request.)
I thought I had found the answer with singleton beans, but every resource says those shouldn't have state and concurrency will be a problem -- both of which kill the idea.
How can I create an instance of "something" that can:
1) Wake up at a defined interval and run a method (i.e. check whether service discovery needs to be performed after X hours and, if so, do it).
2) Provide something like a getter method that can return some strings.
3) Provide a way in #2 to execute a method in the background without delaying return (basically detect that an instance property exceeds a value and execute -- or I suppose, issue a request to execute -- an instance method).
I have experience with multi-threaded programming, and I have no problem using threads and mutexes. I'm just not sure that's the proper way to go in Spring.
Singletons ideally shouldn't have state because of multithreading issues. However, it sounds like what you're describing is essentially a periodic query that returns an object describing the results of the discovery mechanism, and you're implementing a cache. Here's what I'd suggest:
Create an immutable (value) object MyEndpointDiscoveryResults to hold the discovery results (e.g., endpoint address(es) or whatever other information is relevant to the SOAP consumers).
Create a singleton Spring bean MyEndpointDiscoveryService.
On the discovery service, hold an AtomicReference<MyEndpointDiscoveryResults> (or even just a plain volatile variable). This ensures that all threads see updated results, while confining the concurrency interactions to a single, atomically updated field containing an immutable object.
Use @Scheduled or another mechanism to run the appropriate discovery protocol. When there is an update, construct the entire result object, then save it into that field. A sketch of this shape is below.
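A minimal sketch, assuming @EnableScheduling is switched on in a configuration class; the class names, interval and discovery call are illustrative:

    import java.util.concurrent.atomic.AtomicReference;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Service;

    // immutable value object holding whatever the discovery call returned
    class MyEndpointDiscoveryResults {
        private final String endpointUrl;

        MyEndpointDiscoveryResults(String endpointUrl) {
            this.endpointUrl = endpointUrl;
        }

        String getEndpointUrl() {
            return endpointUrl;
        }
    }

    @Service
    public class MyEndpointDiscoveryService {

        private final AtomicReference<MyEndpointDiscoveryResults> results = new AtomicReference<>();

        // re-run discovery every hour; consumers always see the last complete result
        @Scheduled(fixedRate = 60 * 60 * 1000)
        public void refresh() {
            results.set(performDiscovery());
        }

        public MyEndpointDiscoveryResults getCurrentResults() {
            return results.get();
        }

        private MyEndpointDiscoveryResults performDiscovery() {
            // ... call the external API here and build the immutable result object
            return new MyEndpointDiscoveryResults("http://example.com/discovered-endpoint");
        }
    }

The request handlers then just call getCurrentResults() and never pay the discovery cost themselves.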
I had two Java web apps with servlets that communicated with each other using absolute URLs. I did this because they wouldn't always be running on the same server, and even when they were it still worked fine.
However, I've now changed the code so there is only a single web app and the first app (A) now uses the second app (B) more as a library. So, I'd like to be able to make calls on the second app directly without having to know the full URL.
Ideally, the servlets in B would be pure controllers, but unfortunately they are not and the logic is wrapped up in the request and the response and not easily decoupled.
The only option I've seen is to use a RequestDispatcher. However, when getting a dispatcher using the context:
context.getRequestDispatcher("/servlet/mapping");
To make the call I'd then need to synthesize request and response objects, and I don't know how to do this. I've looked into wrappers, but they need to start from something, and I don't have a starting point. I could create my own wrapper to handle the query parameters, but again, I don't have a request to start from.
Or is there a simpler solution that I'm missing? Thanks!
Just turn all your request.getParameter() type calls into regular variables passed to a function. Assuming your servlet writes out text and not binary, turn all your out.print() calls into concatenations onto a string that you return at the end of the method. Or pass in an OutputStream rather than the response object, and write to that instead of response.getOutputStream(). Then your servlet becomes a regular class that takes some parameters and returns a string, or prints to the stream you passed in.
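A rough sketch of that refactoring, with hypothetical class, method and parameter names:

    import java.io.PrintWriter;

    // What used to be the body of B's doGet(), turned into a plain class.
    public class ReportGenerator {

        // parameters that were previously read with request.getParameter(...)
        public String generate(String userId, String reportType) {
            StringBuilder out = new StringBuilder();
            out.append("Report for ").append(userId)
               .append(" of type ").append(reportType);
            return out.toString();
        }

        // alternative: write straight to a writer supplied by the caller
        public void generate(String userId, String reportType, PrintWriter out) {
            out.print(generate(userId, reportType));
        }
    }

The original servlet (and app A) can then both call it:

    String body = new ReportGenerator().generate(request.getParameter("userId"),
                                                 request.getParameter("reportType"));
    response.getWriter().write(body);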
I am currently using a @POST web service to retrieve data.
My idea, at the beginning, was to pass a map of parameters. My function on the server side would then take care of reading the needed parameters from the map and returning the response.
This is to avoid having a big number of almost identical functions on the server side.
But if I understood correctly, @POST should be used for creating content.
So my question: is it a big programming mistake to use @POST for data retrieval?
Is it better to create one web service per use case, even if that means a lot of them?
Thanks.
Romain.
POST is used to say that you are submitting data.
GET requests can be bookmarked; POST requests can't. Before there were single-page web applications we used post-redirect-get to accept a data submission and display a bookmarkable page.
If you use POST to retrieve data then web caching doesn't work, because the caching code doesn't cache POSTs; it expects a POST to mean it needs to invalidate its cache. If you split your services out use-case-wise and use GET, then you can have something like Squid cache the responses.
You may not need to implement caching right now, but it would be good to keep the option open. Making your services act in a compliant way means you can get leverage from existing tools and infrastructure (which is a selling point of REST).
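If it helps, a minimal sketch of a per-use-case GET resource in JAX-RS; the path, parameter names and lookup are made up for illustration:

    import java.util.Arrays;
    import java.util.List;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.QueryParam;
    import javax.ws.rs.core.MediaType;

    @Path("/orders")
    public class OrderResource {

        // cacheable, bookmarkable retrieval: GET /orders?customerId=42&status=OPEN
        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public List<String> getOrders(@QueryParam("customerId") long customerId,
                                      @QueryParam("status") String status) {
            return findOrders(customerId, status); // stands in for the real lookup
        }

        private List<String> findOrders(long customerId, String status) {
            // ... query the backend
            return Arrays.asList("order-1-for-" + customerId);
        }
    }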
doGet()
Called by the server (via the service method) to allow a servlet to handle a GET request.
doPost()
Called by the server (via the service method) to allow a servlet to handle a POST request.
No issues with either of them. Both will handle your request.