Hello, Stack Overflow! We are developing a system which should be horizontally scalable, so a messaging system seems to be the right approach, but it is very low-level. Our main requirement is persistent connections between clients and the server system (the clients are mobile applications communicating with the server via an XML-based protocol). The next very important task is work distribution based on each node's current load. We are currently using a legacy application based on the Apache Mina framework, but it is not scalable. So, what architecture would be sufficient, and what libraries or frameworks do you know of that would solve our problems?
Work distribution should be based on task length, which could be variable.
Every application server in the cluster should be able to send a message to a client at any time, without a request from the client (push).
And how about Hazelcast or GridGain?
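For the persistent-connection and push requirements, a non-blocking I/O framework such as Netty (a common successor to Mina) is one option. Below is a minimal, hypothetical sketch of the push pattern: keep a registry of live channels and write to them whenever the server decides to. The handler and message names are illustrative, and it assumes a StringDecoder/StringEncoder already installed in the channel pipeline.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.group.ChannelGroup;
import io.netty.channel.group.DefaultChannelGroup;
import io.netty.util.concurrent.GlobalEventExecutor;

public class PushHandler extends SimpleChannelInboundHandler<String> {

    // Registry of live client connections; closed channels are removed automatically.
    private static final ChannelGroup clients =
            new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        clients.add(ctx.channel());
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String xmlRequest) {
        // Dispatch the XML request to a worker chosen by current node load.
    }

    // Application code calls this to push an event with no client request involved.
    public static void pushToAll(String xmlEvent) {
        clients.writeAndFlush(xmlEvent);
    }
}
```

Load-based distribution can then sit behind channelRead0, for example by publishing tasks onto a distributed queue that the least-loaded node consumes; Hazelcast's distributed queues and executors cover exactly that niche.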
Background Context:
Due to enterprise limitations, an uncooperative 3rd party vendor, and a lack of internal tools, this approach has been deemed most desirable. I am fully aware that there are easier ways to do this, but that decision is a couple of pay grades away from my hands, and I'm not about to fund new development efforts out of my own pocket.
Problem:
We need to send an internal file to an external vendor. The team responsible for these types of files only transfers with SFTP, while our vendor only accepts files via REST API calls. The idea we came up with (considering the above constraints) was to use our OpenShift environment to host a "middle-man" SFTP server (running from a jar file) that will hit the vendor's API after our team sends it the file.
I have learned that if we want to get SFTP to work with OpenShift, we need to set up our cluster and pods with an ingress/external IP. This looks promising, but due to enterprise bureaucracy, I'm waiting for the OpenShift admins to make the required changes before I can see if this works, and I'm running out of time.
Questions:
Is this approach even possible with the technologies involved? Am I on the right track?
Are there other configuration options I should be using instead of what I explained above?
Are there any clever ways in which an SFTP client can send a file via HTTP request? So instead of running an embedded SFTP server, we could just set up a web service instead (this is what our infrastructure supports and prefers).
References:
https://docs.openshift.com/container-platform/4.5/networking/configuring_ingress_cluster_traffic/configuring-externalip.html
https://docs.openshift.com/container-platform/4.5/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-service-external-ip.html#configuring-ingress-cluster-traffic-service-external-ip
That's totally possible; I have done it in the past with OpenShift 3.10. Using externalIPs is the right approach.
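For reference, a rough sketch of such a middle-man is below: an embedded SFTP server (Apache MINA SSHD) running from a jar, which forwards a received file to the vendor's REST endpoint using Java 11's HttpClient. The port, credentials, vendor URL, and the SSHD package layout (which varies between SSHD versions) are all assumptions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collections;

import org.apache.sshd.server.SshServer;
import org.apache.sshd.server.keyprovider.SimpleGeneratorHostKeyProvider;
import org.apache.sshd.sftp.server.SftpSubsystemFactory;

public class SftpToRestBridge {

    public static void main(String[] args) throws Exception {
        SshServer sshd = SshServer.setUpDefaultServer();
        sshd.setPort(2222); // non-privileged port; the externalIP Service maps to it
        sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(Paths.get("hostkey.ser")));
        sshd.setPasswordAuthenticator((user, pass, session) -> "team".equals(user)); // demo only
        sshd.setSubsystemFactories(Collections.singletonList(new SftpSubsystemFactory()));
        sshd.start();
        // A real bridge would watch the upload directory (or hook SFTP events)
        // and call forwardToVendor(path) once a file finishes uploading.
        Thread.currentThread().join();
    }

    // POST the received file to the vendor's REST endpoint (URL is illustrative).
    static void forwardToVendor(Path file) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://vendor.example.com/api/files"))
                .header("Content-Type", "application/octet-stream")
                .POST(HttpRequest.BodyPublishers.ofFile(file))
                .build();
        client.send(request, HttpResponse.BodyHandlers.discarding());
    }
}
```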
Does anyone have experience running a JRuby project on JBoss (using TorqueBox or whatever) with the ability to communicate with other Java apps that are not on the same JBoss as the JRuby app, i.e. some Java project on another JBoss?
I know there is torquebox-messaging, but I don't know if it's possible to communicate with an external app (one outside the JRuby app's JBoss).
Best practices are welcome.
Thanks in advance.
P.S. Placing that other app on the same JBoss as the JRuby app is not an acceptable solution.
I can recommend using Thrift and building the communication with it.
Thrift has code generators for both of the languages you need (Java and JRuby) and provides good, fast communication.
UPDATED:
Thrift is an RPC (remote procedure call) framework developed at Facebook. You can read about it in detail on Wikipedia.
In a few words, to save you time, here is what it is and how to use it:
You describe your data structures and service interface in a .thrift file (or files), and from it you generate all the needed source files (with all the necessary serialization) for one or more languages. Then you can create a server and a client in a few lines.
On the client side, using the service looks like calling a simple class.
With Thrift you can choose which protocol and transport are used.
In most cases the Binary or Compact protocol is used over a blocking or non-blocking transport, so network communication is light and fast, with fast serialization.
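For illustration, here is a sketch of the server half in Java, assuming a service named TaskService generated from a hypothetical .thrift file; it uses the binary protocol over a blocking, thread-pooled transport as described above.

```java
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.server.TServer;
import org.apache.thrift.server.TThreadPoolServer;
import org.apache.thrift.transport.TServerSocket;

public class ThriftServerMain {
    public static void main(String[] args) throws Exception {
        TServerSocket transport = new TServerSocket(9090);
        // TaskService.Processor wraps your handler, which implements the
        // generated TaskService.Iface:
        // class TaskServiceHandler implements TaskService.Iface { ... }
        TaskService.Processor<TaskServiceHandler> processor =
                new TaskService.Processor<>(new TaskServiceHandler());
        TServer server = new TThreadPoolServer(
                new TThreadPoolServer.Args(transport)
                        .processor(processor)
                        .protocolFactory(new TBinaryProtocol.Factory()));
        server.serve(); // blocks, handling each client on a pool thread
    }
}
```

The client is symmetric: open a TSocket, wrap it in a TBinaryProtocol, and call methods on the generated TaskService.Client as if it were a local class.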
SOAP (XML over HTTP) packets are several times bigger, and SOAP is inappropriate for sending binary data, but that is not the only problem: XML serialization is also very slow, so with SOAP you incur a big overhead. You also need to write (or use a third-party) library for calling the server (a thin network layer); Thrift has already done that for you.
SMTP, and JMS generally, are inappropriate for realtime question-answer communication.
I mean, if you just need to put a message in a queue so that someone eventually takes it and processes it, you can (and should) use JMS or any other MQ service (Thrift can do this too, but an MQ architecture is better for that problem).
But if you need realtime query-answer calls, you should use RPC; the protocol can be HTTP (REST, SOAP), binary (Thrift, ProtoBuf, JDBC, etc.), or anything else.
Thrift (and ProtoBuf) provide a framework that generates the client and server for you, insulating you from the low-level issues.
P.S:
I made an example in the past, https://github.com/imysak/using-thrift (communication via a Thrift Java server plus a Java client or a node.js client); maybe it will be useful for someone. But you can find simpler and better examples.
TorqueBox supports JMS. The gem you mentioned, torquebox-messaging, allows publishing and processing of HornetQ messages on the local JBoss AS server/cluster that the JRuby app is running in. I don't think it currently supports connecting to remote servers.
Using this functionality in your JRuby app, you could then configure your Java app on another server to communicate with the HornetQ instance running in the JBoss AS that hosts the JRuby app.
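As a sketch of that Java-side configuration (the JNDI names, host, and queue are assumptions, and the jnp-based lookup matches the JBoss AS generations that HornetQ shipped with):

```java
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteQueueListener {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.PROVIDER_URL, "jnp://jruby-jboss-host:1099"); // the JBoss hosting the JRuby app
        Context ctx = new InitialContext(env);

        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("/queues/tasks"); // queue the JRuby app publishes to

        Connection conn = cf.createConnection();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);
        conn.start();

        TextMessage msg = (TextMessage) consumer.receive(); // blocks until a message arrives
        System.out.println("Received: " + msg.getText());
        conn.close();
    }
}
```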
Alternatively you could always implement your own communication protocol or use another Java library - you have access to anything Java you want to run from JRuby.
You can use web services or JMS for that.
I need to push events to web clients in a cross-browser manner (iPhone, iPad, Android, IE/FF/Chrome/etc.) from a Spring based Java server. I am using backbone.js on the client side.
To the best of my knowledge, I can either go with a WebSocket-only approach, or I can use something like socket.io.
What is the best practice for this issue, and which platform/frameworks should I use?
Thanks
Looks like you're interested in an AJAX Push engine. ICEPush (same group that makes ICEFaces) provides these capabilities, and works with a variety of server- and client-side frameworks. There is also APE.
You can have a look at Lightstreamer.
My company is currently using it to push real time financial data from a web server.
I suppose Grizzly or Netty may fit your needs. I don't have real experience in that area, unfortunately.
I'd recommend socket.io, as you mentioned in your question, if you're doing browser-based eventing from a remote host. Socket.io handles all the connection keep-alives and reconnections directly from JavaScript and has facilities for channeling messages to specific sessions (users). The real advantage comes from the two-way communication of WebSockets without all the boilerplate code of maintaining the connection.
You will need to do some digging for a Java implementation, though. Consider running the server directly on V8.
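If you do want to stay on the JVM, one community Java implementation of the Socket.IO protocol is netty-socketio; a rough sketch follows (library API assumed from its documentation):

```java
import com.corundumstudio.socketio.Configuration;
import com.corundumstudio.socketio.SocketIOServer;

public class PushServer {
    public static void main(String[] args) {
        Configuration config = new Configuration();
        config.setPort(9092); // browsers connect with the standard socket.io JS client

        SocketIOServer server = new SocketIOServer(config);
        server.addConnectListener(client ->
                System.out.println("client connected: " + client.getSessionId()));
        server.start();

        // Later, from application code: push an event to every connected
        // browser, with no request needed from the client.
        server.getBroadcastOperations().sendEvent("news", "portfolio updated");
    }
}
```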
In a very popular ecommerce store, I'd imagine the actual processing of the credit card would be moved to some sort of dedicated application server and made into more of an asynchronous process.
What sort of Java application would that be? I.e., a service that takes a message off the queue, processes the request, and updates some DB table once finished.
In .net, I guess one would use a windows service. What would you use in the java world?
It is typically a J2EE application that uses an HTTP web service interface or a JMS messaging interface. HTTP interfaces are accessible via a URL, and JMS connects to a queue to pick up messages that are sent to it. The app can run on any of the major commercial (WebSphere, WebLogic, Oracle) or free (GlassFish, JBoss) servers.
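A typical shape for the JMS variant is a message-driven bean; the queue name and payload handling below are illustrative:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
                              propertyValue = "queue/cardCharges")
})
public class CardChargeProcessor implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            String order = ((TextMessage) message).getText();
            // 1. Call the payment gateway (slow, external).
            // 2. Update the orders table with the outcome.
            //    Both run inside the container-managed transaction.
        } catch (Exception e) {
            // Rolling back the transaction causes the message to be redelivered.
            throw new RuntimeException(e);
        }
    }
}
```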
In Java you already have great open-source projects that do all this for you, like GlassFish, Tomcat, etc.
For a mission critical system, you might want something like IBM MQ series as the middleware, and a straight Java application that uses the MQ interface to process the requests.
At a few banks that I know of, this is the architecture. Originally the application servers were written in C, as was the middleware. They were able to switch to Java because the code that was actually doing the critical work (sending and receiving messages, assuring guaranteed delivery, protecting against interruptions if a component went down) was handled by IBM MQ.
In our case we use an application server from Sybase that can house Java components. They are pretty much standard Java classes that have public methods that are exposed for calling via CORBA. Components can also be scheduled to run constantly or on a schedule (like a service) to look for work to do (via items in a database table, an Oracle AQ queue, or a JMS queue). All of this is contained in the app server and the app server provides transaction management, resource management, and database connection pooling for us.
Or use an OSGi environment.
I have to design a distributed application composed by one server (developed in Java) and one or more remote GUI clients (Swing application with windows).
As stated before the clients are Swing GUI application that can connect to the server in order to receive and send data.
The communication is bidirectional (Server <=> Clients).
Data sent over the network is mainly composed by my domain logic objects.
Two brief examples: a client calls the server in order to receive data to populate a table inside a window; the server calls a client in order to send data to refresh a specific widget (like a button).
The amount of data transmitted between server and clients and the frequency of the network calls are not particularly high.
Which technology do you suggest me for the server-clients communication?
I've in mind one technology suitable for me but I would like to know your opinions.
Thanks a lot.
The first technology that comes to mind is RMI, which is suitable if you're communicating between a Java client and a Java server. But you may run into difficulties if you want to switch the client technology to, say, a web interface.
I would go with RMI but implement the whole architecture using the Spring Framework. This way it is independent of the technology used and can be switched to other forms of communication (such as HTTP) with almost no coding.
UPDATE: And Spring will allow you to have no RMI-specific code at all.
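A sketch of what that looks like with Spring remoting (names are illustrative): the service is a plain Java interface, and only the exporter bean knows RMI exists, so swapping the transport later means replacing one bean.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.rmi.RmiServiceExporter;

@Configuration
public class ServerConfig {

    public interface TableDataService {          // shared with the Swing client
        java.util.List<String> fetchRows(String tableId);
    }

    @Bean
    public TableDataService tableDataService() {
        return tableId -> java.util.List.of("row1", "row2"); // stub implementation
    }

    // Exposes the service over RMI; only this bean is transport-specific.
    @Bean
    public RmiServiceExporter tableDataExporter(TableDataService service) {
        RmiServiceExporter exporter = new RmiServiceExporter();
        exporter.setServiceName("TableDataService");
        exporter.setServiceInterface(TableDataService.class);
        exporter.setService(service);
        exporter.setRegistryPort(1099);
        return exporter;
    }
}
```

The Swing client would obtain the same interface via an RmiProxyFactoryBean pointing at rmi://server-host:1099/TableDataService.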
I believe sockets should do the trick. They are flexible and not especially hard to code or maintain. Most entry-level programmers should also be able to maintain them. They are also fast and adapt to any kind of environment.
Unless your server is going to be off-site or you expect firewall issues; in that case, web services are the way to go, since their basic communication happens over port 80.
I would second msparer's suggestion of RMI, except I would just use EJB3 (which uses RMI as the communication protocol). EJB3 is very easy to use, and even if you don't need the other features EJB gives you (e.g., security), you can still leverage Container Managed Transactions (CMT). It really does make development easy.
As for the server-to-client communication, you would probably want to use JMS. Again, with EJB3 this is pretty easy to do with annotations. The clients subscribe to the message service and receive update notifications from the server (see the sketch after this answer).
And yes, I am currently working on an application that does this very thing. Unfortunately we are using EJB2.1. Still, it is my opinion that this is where EJBs really shine. Using EJBs in a web app is frequently overkill, but in a distributed client/server app they work very well.
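For the server-to-client leg, a minimal sketch of a client-side JMS topic subscription (JNDI names and the widget-refresh logic are illustrative):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class WidgetRefreshSubscriber {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // jndi.properties points at the server
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Topic topic = (Topic) ctx.lookup("topic/widgetUpdates");

        Connection conn = cf.createConnection();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(topic);
        consumer.setMessageListener(msg ->
                javax.swing.SwingUtilities.invokeLater(() -> refreshWidget(msg)));
        conn.start(); // push notifications now arrive without client requests
        Thread.currentThread().join();
    }

    static void refreshWidget(Message msg) {
        // ... update the specific widget from the message payload ...
    }
}
```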
You can try using ICE (http://www.zeroc.com) to establish the server-client connection.