calling a method in which way? [closed] - java

I want to make a connection between two parts of my program, which may be located in separate places.
I have several choices for making this connection:
1) using RPC/RMI: every request is sent to the second part as a remote method call
2) using a normal function call
3) using a queue (in memory): every request is placed in a queue, and the second part picks it up and answers it
4) using a queue (in a DB): like number 3, but in a DB
5) using a socket to send the data (TCP/IP, UDP, ...)
6) using a web service
Can anyone compare these options?

Here are my thoughts:
RPC/RMI - Uses RMI's wire protocol (JRMP, or optionally RMI-IIOP), which effectively limits you to using Java for both the client and the server.
Normal function call means both objects are resident in the same JVM and cannot be distributed. This will be the fastest option for a single method call. Reusing that object means having to package it in a JAR and redistribute it to all the other apps that need it. Now you've got to know where all those JARs are if the code changes. Distribution is an issue.
Asynchronous processing, but you'll have to write all the queue and handling code yourself. This takes strong multi-threading skills. It could be the fastest of all, because it's all in memory, and it allows parallel processing if you have multiple cores. It's also the most dangerous, because you have to be thread-safe. (A minimal sketch appears further down in this answer.)
Don't understand why you'd have the queue in a database. I'd prefer a Java EE app server for doing this. Not all RDBMS have queues running inside them. If you agree and go with JMS, this will be asynchronous and distributed and robust. It'll allow topics or queues, which can be flexible. But it'll be slower than the others.
Using a socket is just like RMI, except you have to write the entire protocol. Lots of work.
A web service will be similar to RMI in performance. But it'll use HTTP as the protocol, which means that any client that can formulate an HTTP request can call it. REST or SOAP will give you flexibility about message choices (e.g., XML, JSON, etc.)
Synchronous calls mean the caller and callee are directly coupled. The interface has to be kept constant.
Asynchronous calls mean looser coupling between the caller and callee. Like the synchronous case, the messages have to be relatively stable.
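To make option 3 concrete, here is the minimal sketch mentioned above. It assumes both parts live in the same JVM and uses a plain BlockingQueue; the names and the queue capacity are illustrative only, not anything from the question:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class InMemoryQueueSketch {

        public static void main(String[] args) throws InterruptedException {
            // Bounded queue so a slow consumer applies back-pressure to the producer.
            BlockingQueue<String> requests = new ArrayBlockingQueue<>(100);

            // "Second part": consumes requests on its own thread.
            Thread consumer = new Thread(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        String request = requests.take();      // blocks until work arrives
                        System.out.println("handled " + request);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();         // exit cleanly
                }
            });
            consumer.start();

            // "First part": places requests on the queue and carries on (asynchronous).
            requests.put("order-1");
            requests.put("order-2");

            Thread.sleep(200);                                  // let the consumer drain the queue
            consumer.interrupt();
        }
    }

Returning results or propagating exceptions back to the caller would need a second queue or futures, which is exactly the extra handling code mentioned above.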
UPDATE: The picture you added makes the problem murkier. The interaction with 3rd party merchant and card handlers makes your error situation dicier. What happens if one of those fails? What if either one is unavailable? If the bank fails, how do you communicate that back to the 3rd parties? Very complicated, indeed. You'll have bigger problems than just choosing between RMI and web services.

There's quite a lot to answer there. Briefly:
If your program is split up into a client/server, you can't use a normal method call.
Queuing would possibly work if your method calls are one-way (have no return value). However, if you want to return values synchronously, what do you do? Have two queues, one for outgoing requests and one for incoming results? And what do you do about exceptions? It's not a natural fit.
Web services will work. However they're often used for bridging between client/servers on different platforms and written in different languages, so it may be a lot of unnecessary work here. The same applies to CORBA, btw.
A TCP socket solution would work, but requires quite a lot of extra work to set up (if you want to invoke separate methods etc.). Note (also) that TCP and UDP are fundamentally different and for reliability purposes I wouldn't use UDP normally for this sort of stuff.
RMI is pretty straightforward to set up in Java, and that would probably be a good first step. Check out the official Java RMI tutorial.
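As a rough idea of how little code plain RMI needs, here is a hedged sketch; the OrderService interface, registry port and method names are made up for illustration, and the client half would normally live in a separate JVM:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    interface OrderService extends Remote {
        String placeOrder(String orderXml) throws RemoteException;
    }

    public class RmiSketch {

        static class OrderServiceImpl implements OrderService {
            public String placeOrder(String orderXml) {
                return "ACK:" + orderXml;                        // stand-in business logic
            }
        }

        public static void main(String[] args) throws Exception {
            // --- server side: export the object and register it ---
            OrderServiceImpl impl = new OrderServiceImpl();
            OrderService stub = (OrderService) UnicastRemoteObject.exportObject(impl, 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("OrderService", stub);

            // --- client side (normally a separate JVM): look it up and call it ---
            Registry clientView = LocateRegistry.getRegistry("localhost", 1099);
            OrderService remote = (OrderService) clientView.lookup("OrderService");
            System.out.println(remote.placeOrder("<order id='1'/>"));
        }
    }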

Option (2) will only work if both parts of your program are running in the same JVM.
Options (3) and (4) only work if you don't mind the calls being asynchronous, i.e. not returning a result directly to the caller.
Options (5) and (6) take a lot of work to set up, you'd be better off with (1).


Best approach to create a webapp which updates the UI dynamically on its own REST service invocation by an external client [closed]

I have a webapp which exposes a REST endpoint. Clients POST data to this endpoint. I need to display the content they post in my webapp, without refreshing the UI.
My first approach was to save the posted data in a database and to have an AJAX call which continuously checks for new data to display. This adds overhead, because I don't actually need to save what I receive.
Secondly, I came across WebSockets, where the client and server can have full-duplex communication.
Is there any better way of doing this?
P.S.: My REST endpoints are developed using Spring Boot.
Generally there are three ways to get the client updated on a server event: polling, long-polling, and server-side push. They all have their pros and cons:
Polling
Polling is the easiest way of implementing this. You just need a timer on the client side and that's it. Repeatedly query the server for updates and reuse the code you already have.
Especially if you have many clients, the server may get flooded by a large number of GET requests. This may impose computational as well as network overhead. You can mitigate this by using a caching proxy on the server side (either as part of your application or as a separate artifact/service/node/cluster). Caching GET requests is normally quite easy.
Polling may not seem to be the most elegant solution, but in many cases it is good enough. In fact, polling can be considered the "normal" RESTful way to do this. HTTP specifies mechanisms like the If-Modified-Since header which are used to improve polling.
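Since the question mentions Spring Boot, here is a hedged sketch of a polling-friendly endpoint that honours If-Modified-Since, so frequent polls that find nothing new come back as cheap 304 responses. The endpoint path, field names and the publish() hook are illustrative assumptions, not part of the original question:

    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;
    import org.springframework.web.context.request.WebRequest;

    @RestController
    public class UpdatesController {

        private volatile long lastModified = System.currentTimeMillis();
        private volatile String latestPayload = "";

        // Called by whatever handles the POSTs from the external clients.
        public void publish(String payload) {
            this.latestPayload = payload;
            this.lastModified = System.currentTimeMillis();
        }

        @GetMapping("/updates")
        public ResponseEntity<String> poll(WebRequest request) {
            // checkNotModified() compares against the If-Modified-Since header and,
            // when nothing has changed, prepares a 304 Not Modified response for us.
            if (request.checkNotModified(lastModified)) {
                return null;    // Spring has already set the 304 status and headers
            }
            return ResponseEntity.ok(latestPayload);
        }
    }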
Long-Polling
Long-polling works by making the GET request a blocking operation. The client makes a request and the server blocks until there is new data. Then the request (which might have been made long ago) is answered.
This dramatically reduces the network load. But there are several drawbacks: first of all, you can easily get this wrong. For example, when you combine this approach with server-side pooling of session beans, your bean pool can get used up quite fast and you have a wonderful denial-of-service.
Furthermore, long-polling does not work well with certain firewall configurations. Some firewalls may decide that a TCP connection has been quiet for too long and regard it as aborted. They may then silently discard any data belonging to the connection.
Caching proxies and other intermediaries may also not like long-polling, although I have no concrete experience I can share here.
Although I spent quite some time writing about the drawbacks, there are cases when long-polling is the best solution. You just need to know what you are doing.
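For completeness, a hedged sketch of what long-polling could look like on the server with Spring MVC's DeferredResult; the endpoint name, timeout value and the onNewData() callback are illustrative assumptions:

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;
    import org.springframework.web.context.request.async.DeferredResult;

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    @RestController
    public class LongPollController {

        // Clients currently waiting for data; each entry is one parked GET request.
        private final Queue<DeferredResult<String>> waiting = new ConcurrentLinkedQueue<>();

        @GetMapping("/updates/long-poll")
        public DeferredResult<String> poll() {
            // The request thread is released immediately; the response stays open
            // until setResult() is called or the 30 second timeout fires.
            DeferredResult<String> result = new DeferredResult<>(30_000L, "no-update");
            waiting.add(result);
            result.onCompletion(() -> waiting.remove(result));
            return result;
        }

        // Called by whatever handles the POSTs from the external clients.
        public void onNewData(String payload) {
            DeferredResult<String> result;
            while ((result = waiting.poll()) != null) {
                result.setResult(payload);      // answers the parked request
            }
        }
    }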
Server-Side Push
The server can also directly inform the clients about a change. WebSockets are a standard that covers this approach. You can use any other API for establishing TCP connections, but in many cases WebSockets are the way to go. Essentially a TCP connection is left open (just like in long-polling) and the server uses it to push changes to the client.
At the network level, this approach is similar to long-polling, so it shares some of the same drawbacks. For example, you can run into the same firewall issues. This is one of the reasons why WebSocket endpoints should send heartbeats.
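The text above is about WebSockets, but since the question is a Spring Boot REST app where only the server needs to push, Server-Sent Events are a lighter option over plain HTTP. A hedged sketch using Spring MVC's SseEmitter (endpoint paths and names are illustrative assumptions):

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RestController;
    import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

    import java.util.Set;
    import java.util.concurrent.CopyOnWriteArraySet;

    @RestController
    public class PushController {

        private final Set<SseEmitter> emitters = new CopyOnWriteArraySet<>();

        // The browser opens this once; the connection stays open for pushes.
        @GetMapping("/stream")
        public SseEmitter stream() {
            SseEmitter emitter = new SseEmitter(Long.MAX_VALUE);   // effectively no timeout
            emitters.add(emitter);
            emitter.onCompletion(() -> emitters.remove(emitter));
            emitter.onTimeout(() -> emitters.remove(emitter));
            return emitter;
        }

        // The existing REST endpoint the external clients POST to; it now also pushes.
        @PostMapping("/data")
        public void receive(@RequestBody String payload) {
            for (SseEmitter emitter : emitters) {
                try {
                    emitter.send(SseEmitter.event().data(payload));
                } catch (Exception e) {
                    emitters.remove(emitter);                      // the browser went away
                }
            }
        }
    }

On the browser side this only needs the standard EventSource API, and the heartbeat concern mentioned above applies here as well.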
In the end it depends on your concrete requirements which solution is best. I'd recommend using a simple polling mechanism if you are fine with polling every ten seconds or less frequently and if this doesn't get you into trouble with battery usage or data transmission volume on the client (e.g. you are building a smartphone app). If polling is not sufficient, consider the other two options.

SOAP to REST conversion: Fresh or reuse? [closed]

My team has been tasked with converting our application's existing SOAP API to REST. My preference is to rewrite the API from scratch (reusing only the actual business logic that each operation performs). Others in my team want to just have REST as a wrapper over the existing SOAP, meaning we would expose REST services, but when a request comes in, our application would internally call the existing SOAP operations.
Could you please offer suggestions on which of these is the best approach? It looks like my way is cleaner and lighter and faster but their way allows some code re-use.
It depends on your priorities and on whether you expect to receive many requests for changes in the API's behavior.
Ample time and more changes expected: if you have the time, writing from scratch is of course recommended, as it will be cleaner, lighter, and faster. It will also make shipping new features easier.
Less time and fewer changes expected, or an API too big to regression-test: if you have time constraints, I would suggest going with REST over the SOAP API. You are only going to expose the REST API to clients anyway, so you can do the internal refactoring and phase out SOAP as time permits. Changing the whole codebase means regression testing the entire module.
"Could you please offer suggestions on which of these is the best approach? It looks like my way is cleaner and lighter and faster but their way allows some code re-use."
I wrote a framework that does the SOAP -> REST conversion. It was used internally in one of the companies I used to work for. The framework was capable of doing this with a mapping file in less than 10 minutes, but we did not use it for all services. Here's why...
Not all services (WSDL based) are designed with REST in mind. Some of them are just remote methods being invoked on a service and nothing more.
The service may not have resources that can be mapped.
Even if there are resources, they may not map cleanly to REST verbs (GET/POST, etc.), and some of the calls are not easily translatable.
A mapping framework has an overhead of its own. The framework's SLA was quite low (single-digit milliseconds), but even a small overhead may not be acceptable for critical services. The time it takes to profile and get this overhead down should not be underestimated.
In summary, the approach works for some services, but it takes some engineering effort to get there. It makes sense to do this if you have, say, 500+ services that need to be converted in a short span of time, as a temporary measure.
The fact that you would have to convert your REST calls to SOAP calls in order to reuse your current code definitely suggests a rewrite to me!
To make testing, debugging, and profiling easier, you should have a pure POJO based API that is called by your REST API.
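A hedged sketch of that layering, with Spring used purely as an example of the thin REST layer; the AccountService name and the endpoint are invented for illustration:

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    // Pure POJO API: no SOAP or REST types in its signature, so it is easy to unit
    // test and profile, and it could also back the existing SOAP endpoint during
    // the transition period.
    class AccountService {
        public String findAccount(String id) {
            // ... real business logic goes here ...
            return "{\"id\":\"" + id + "\",\"status\":\"ACTIVE\"}";
        }
    }

    // REST layer: a thin adapter that only maps HTTP onto the POJO call.
    @RestController
    class AccountController {
        private final AccountService service = new AccountService();   // or inject it

        @GetMapping("/accounts/{id}")
        public String getAccount(@PathVariable String id) {
            return service.findAccount(id);
        }
    }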
What back-end server are you using? Several of the more popular Java web servers have tooling that makes creating REST APIs really easy.

Creating a Java program which runs without fail [closed]

I am new to ZooKeeper and Apache Curator and need your help to design a program:
I need to create a Java program that will run a script every hour (based on a cron expression provided by the end user).
Say I have 3 servers. I need to make sure the script runs every hour without fail, even if a server is down (in that case the script must run on another server). Every hour the script will run on only one server.
I have to create an interface to provide input to this Java program. The input will be (i) the script to be run and (ii) the cron expression for scheduling it.
1) Please suggest how I can design my program to achieve this, and how ZooKeeper and Apache Curator can be used for it.
2) Is there any way to cache the script that the end user provides on these 3 servers? Can Apache Curator's NodeCache be used to cache the script on these 3 servers?
Your response will be highly appreciated.
With three servers, where one is to run no matter what, you need a distributed approach. The problem is that in the event of failures, you might not be able to solve the puzzle of whether to run the script or not.
For a start, you can just have one computer connect to the others and tell them not to run. This is called a "hold down" approach, but it has a lot of issues when you can't connect to the other computers. The problem is that most beginning programmers fail to appreciate how much a networked environment changes the way they need to design programs. Please take a little time to read over the typical fallacies of distributed computing.
Cron solves this by not caring what happens on other computers, so cron has the wrong design goals.
With three computers, you will also have three different clocks, with their own speeds and times. A good distributed solution will have some concept of time that doesn't directly rely on each machine's clock.
Distributed solutions (if they are to tolerate faults or failures) must be able to run without reliable communication to the other machines. Sometimes the group gets split in half, where one group of machines cannot communicate with the other group. In many cases, both groups will perform the "critical" action for fear that the other group didn't. In other cases, both groups might skip the "critical" action, assuming that the other group did it. A good solution will ensure that the "critical" action is performed exactly once, even when the computers cannot communicate. Often this is done by "majority", where your group (quorum) cannot perform a critical action unless it has access to at least a majority of the involved machines.
Look at the Paxos algorithm to get an idea of the issues; and, once you are more aware of the problems, look back at your chosen technologies to determine which parts of the problems they are attempting to solve, keeping the "fallacies of distributed computing" in mind. Also realize that a perfect, 100% correct solution might not be possible, because the pre-selected machine(s) to run the script might suffer a network failure and then a power failure, in such a sequence that the machines that are still up assume there is only a network outage.
This is an interview question, right? If yes, be aware that this answer only gets you partway.
The simplest solution is to have all three servers running, and attempt to acquire a lock to perform the processing. See http://zookeeper.apache.org/doc/trunk/recipes.html#sc_recipes_Locks
To ensure that only one server runs the job, you will need to record the last execution time. This is simply "store a value with a known key," and you'll find it in one of the intro tutorials.
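A hedged sketch of both points using Apache Curator: all three servers wake up on the cron schedule and race for an InterProcessMutex, and only the winner runs the script and records the run in a znode. The connect string, paths and the runScript() placeholder are illustrative assumptions:

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.locks.InterProcessMutex;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    import java.util.concurrent.TimeUnit;

    public class HourlyJobRunner {

        public static void main(String[] args) throws Exception {
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "zk1:2181,zk2:2181,zk3:2181",
                    new ExponentialBackoffRetry(1000, 3));
            client.start();

            InterProcessMutex lock = new InterProcessMutex(client, "/jobs/hourly-script/lock");

            // Only the server that wins the lock runs the script; the others give up
            // after a short wait.
            if (lock.acquire(10, TimeUnit.SECONDS)) {
                try {
                    runScript();
                    // Record the execution time so late-joining servers can check
                    // whether this hour's run already happened.
                    byte[] now = Long.toString(System.currentTimeMillis()).getBytes();
                    if (client.checkExists().forPath("/jobs/hourly-script/last-run") == null) {
                        client.create().creatingParentsIfNeeded()
                              .forPath("/jobs/hourly-script/last-run", now);
                    } else {
                        client.setData().forPath("/jobs/hourly-script/last-run", now);
                    }
                } finally {
                    lock.release();
                }
            }
            client.close();
        }

        private static void runScript() {
            // hypothetical placeholder for invoking the user-supplied script
        }
    }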
Of course, if this is an interview question, the interviewer will ask follow-on questions such as "what happens if the script fails halfway through?" or "what if the computers don't have the same time?" You won't (easily) solve either of those problems with ZooKeeper.

Is Camel or another enterprise integration framework more suitable for this use case? [closed]

My application needs to work as middleware (MW). It receives orders (in the form of XML) from various customers; the XML contains the supplier id, and customers can send the XML to one of these components:
1) JMS queue
2) File system
3) HTTP
4) Web service request (REST/SOAP)
The MW will first validate the incoming request and send an acknowledgement to the customer who placed the order over their preferred channel. The channel and customer endpoint info are in the incoming XML.
Once it has the order, it needs to send order requests to the different suppliers over their preferred channels, in the form of XML. I have the supplier and preferred-channel info in my DB.
So it's an enterprise integration use case.
I was planning to do it using core Java technologies. Here is the approach I was planning:
I will have four listener/entry endpoints, one for each type of incoming request (JMS queue, file system, HTTP, web service request (REST/SOAP)). These listeners will put the XML string on a JMS queue. This queue works as a receptionist and makes the process asynchronous.
Then I will have a JMS consumer listening on the queue (the consumer can be on the same system as the producer or a different one, depending on the load on the producer machine). This consumer will parse the XML string into Java objects, perform the validation, and send the acknowledgement to the customer (the acknowledgement needs to be sent based on the customer's preference; I will use an acknowledgement processor factory which sends it based on that preference). Once validation is done, it converts this POJO to another POJO format so XStream/JAXB can marshal it to XML and send it to the suppliers over their preferred channels (supplier preference is stored in the DB), e.g. via SOAP, JMS, or a file request.
Then I came across this Camel link http://java.dzone.com/articles/open-source-integration-apache and it looks like it provides the perfect solution, and I found that this is an Enterprise Integration use case.
Experts, please advise whether Camel is the right solution for this, or whether some other enterprise integration framework like Spring Integration or an ESB would be more beneficial in this case. If somebody could point me to a resource where an ESB solves this kind of use case, it would be really helpful. I could not explore all the solutions because of time constraints, so I am looking for expert suggestions so that I can concentrate on one.
Something like Camel is completely appropriate for this task.
Things like Camel provide toolsets and components that make stitching together workflows like the one you describe easier, with the caveat that you must learn the overall tool (i.e. Camel, in this case) first.
For a skilled, experienced developer and a simple use case, you can see why they might take the approach that you're taking: building the workflow with the tools at hand, including, perhaps, custom code, rather than taking the time to learn a new tool.
Recall that while tools can be a great benefit (features, testing, quality, documentation), they also bring a burden (support, resources, complexity). A key aspect of bringing tool sets into your environment is that while you may not have written the code, you are still ultimately responsible for its behavior in your environment.
So, all that said, you need to ascertain whether the time investment of incorporating a tool like Camel is worth the benefit to your current project. Odds are that if you intend to continue and do more integrations in the future, investing in such a tool is a good idea, as the tool will make those integrations easier.
But be conscious that something like Camel, which is quite flexible, also brings along with it inherent complexity. But for simple stuff like what you're talking about, I think it's a solid fit.
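For a feel of what the use case in the question could look like in Camel's Java DSL, here is a hedged sketch. The endpoint URIs, queue names and XPath expressions are illustrative assumptions, and the JMS and HTTP components would still need to be configured on the context:

    import org.apache.camel.CamelContext;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class OrderRoutes extends RouteBuilder {

        @Override
        public void configure() {
            // Entry points: each channel normalises into one internal queue,
            // the "receptionist" described in the question.
            from("file:orders/in?move=.done").to("jms:queue:incomingOrders");
            from("jetty:http://0.0.0.0:8080/orders").to("jms:queue:incomingOrders");
            // (a REST/SOAP entry route would be added the same way)

            // Single consumer: validate, acknowledge, then route to the supplier's channel.
            from("jms:queue:incomingOrders")
                .to("validator:order.xsd")                        // XML schema validation
                .wireTap("direct:acknowledge")                    // async ack back to the customer
                .choice()
                    .when(xpath("/order/supplier/channel = 'jms'"))
                        .to("jms:queue:supplierOrders")
                    .when(xpath("/order/supplier/channel = 'file'"))
                        .to("file:orders/out")
                    .otherwise()
                        .to("direct:supplierWebService");

            from("direct:acknowledge")
                .log("sending acknowledgement for ${body}");      // placeholder for the real ack logic

            from("direct:supplierWebService")
                .log("calling supplier web service");             // placeholder
        }

        public static void main(String[] args) throws Exception {
            CamelContext context = new DefaultCamelContext();
            context.addRoutes(new OrderRoutes());
            context.start();
            Thread.sleep(60_000);                                 // keep the routes running for the demo
            context.stop();
        }
    }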

What are performant, scalable ways to resolve links from http://bit.ly [closed]

Given a series of URLS from a stream where millions could be bit.ly, google or tinyurl shortened links, what is the most scalable way to resolve those to get their final url?
A Multi-threaded crawler doing HEAD requests on each short link while caching ones you've already resolved? Are there services that already provide this?
Also factor in not getting blocked by the URL shortening services.
Assume the scale is 20 million shortened urls per day.
Google provides an API. So does bit.ly (and bit.ly asks to be notified of heavy use, and specifies what they mean by light usage). I am not aware of an appropriate API for decoding tinyurl links, but there may be one.
Then you have to fetch on the order of 230 URLs per second to keep up with your desired rates. I would measure typical latencies for each service and create one master actor and as many worker actors as you needed so the actors could block on lookup. (I'd use Akka for this, not default Scala actors, and make sure each worker actor gets its own thread!)
You also should cache the answers locally; it's much faster to look up a known answer than it is to ask these services for one. (The master actor should take care of that.)
After that, if you still can't keep up because of, for example, throttling by the sites, you had better either talk to the sites or you'll have to do things that are rather questionable (rent a bunch of inexpensive servers at different sites and farm out the requests to them).
Using the HEAD method is an interesting idea, but I am afraid it can fail, because I am not sure the services you mentioned support HEAD at all. If, for example, a service is implemented as a Java servlet, it may implement doGet() only, in which case doHead() is unsupported.
I'd suggest you try GET but not read the whole response: read the HTTP status line (and headers) only.
Since you have very serious performance requirements, you cannot make these requests synchronously, i.e. you cannot use HttpURLConnection. You should use the NIO package directly. That way you will be able to send requests to millions of destinations using only one thread and get the responses very quickly.
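The answers above predate java.net.http, but a hedged sketch of the HEAD-plus-cache idea with the JDK's asynchronous HttpClient (Java 11+) looks roughly like this; a shortener normally answers with a 301/302 whose Location header is the target, and if a given service rejects HEAD you would fall back to GET and discard the body. The class and method names are illustrative:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class ShortUrlResolver {

        private final HttpClient client = HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.NEVER)   // we want the Location header itself
                .build();

        private final ConcurrentMap<String, CompletableFuture<String>> cache = new ConcurrentHashMap<>();

        public CompletableFuture<String> resolve(String shortUrl) {
            // computeIfAbsent also de-duplicates concurrent lookups of the same URL
            return cache.computeIfAbsent(shortUrl, url -> {
                HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                        .method("HEAD", HttpRequest.BodyPublishers.noBody())
                        .build();
                return client.sendAsync(request, HttpResponse.BodyHandlers.discarding())
                        .thenApply(response -> response.headers()
                                .firstValue("Location")
                                .orElse(url));                // not a redirect: already the final URL
            });
        }

        public static void main(String[] args) {
            ShortUrlResolver resolver = new ShortUrlResolver();
            resolver.resolve("https://bit.ly/example")        // hypothetical short link
                    .thenAccept(System.out::println)
                    .join();
        }
    }

Note that this resolves only one redirect hop and respects none of the providers' rate limits; for 20 million URLs a day you would still want the API-based and throttling-aware measures described above.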
