Hosted message queue for Java-based app in AWS us-east? - java

I'm looking for a message queue as a service that ..
.. is hosted in AWS us-east
.. offers real PubSub (not polling!)
.. can be used in production
.. offers high availability
.. has a good client for Java
I only found CloudAMQP (still in beta), AppEngine Task Queue (not AWS), SQS (polling only), Redis To Go (no high availability? - their Twitter stream seems full of issues) and IronMQ (polling only).
What am I missing?

You should check out one of the open PaaS platforms (such as Cloudify, OpenShift or Cloud Foundry). With such a PaaS you can easily onboard most services, including the popular message queues like ActiveMQ, RabbitMQ or SonicMQ.
Cloudify (of which I'm one of the contributors) is open source and free, and can onboard almost any message queue you want on any cloud.
You can easily onboard ActiveMQ, RabbitMQ, SonicMQ, or any other service you're used to working with off the cloud.

Looks like Iron.io has added pub/sub. Maybe it fits your needs now? It also appears to speak the beanstalkd protocol, so you're potentially free to migrate easily to a self-hosted solution at some point in the future (should you feel that urge!).

Have you tried pure messaging solutions? http://www.pubnub.com/faq or http://pusher.com ? According to their websites they have presence on EC2.

Related

How would I implement an embedded SFTP Server on Openshift

Background Context:
Due to enterprise limitations, an uncooperative 3rd party vendor, and a lack of internal tools, this approach has been deemed most desirable. I am fully aware that there are easier ways to do this, but that decision is a couple of pay grades away from my hands, and I'm not about to fund new development efforts out of my own pocket.
Problem:
We need to send an internal file to an external vendor. The team responsible for these types of files only transfers with SFTP, while our vendor only accepts files via REST API calls. The idea we came up with (considering the above constraints) was to use our OpenShift environment to host a "middle-man" SFTP server (running from a jar file) that will hit the vendor's API after our team sends it the file.
I have learned that if we want to get SFTP to work with OpenShift, we need to set up our cluster and pods with an ingress/external IP. This looks promising, but due to enterprise bureaucracy, I'm waiting for the OpenShift admins to make the required changes before I can see if this works, and I'm running out of time.
Questions:
Is this approach even possible with the technologies involved? Am I on the right track?
Are there other configuration options I should be using instead of what I explained above?
Are there any clever ways in which an SFTP client can send a file via HTTP request? So instead of running an embedded SFTP server, we could just set up a web service instead (this is what our infrastructure supports and prefers).
References:
https://docs.openshift.com/container-platform/4.5/networking/configuring_ingress_cluster_traffic/configuring-externalip.html
https://docs.openshift.com/container-platform/4.5/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-service-external-ip.html#configuring-ingress-cluster-traffic-service-external-ip
That's totally possible; I have done it in the past with OpenShift 3.10. Using externalIPs is the right approach.
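On question 3 (replacing the SFTP hop with a web service, which your infrastructure already supports): a plain HTTP file upload is straightforward with the JDK alone. Below is a minimal sketch using the built-in HttpServer and HttpClient; the endpoint path and all names are made up for illustration, and the vendor-API call is only marked as a comment.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicReference;

// Sketch: replace the embedded SFTP server with a plain HTTP endpoint that
// accepts the file body via POST; the handler is where the call to the
// vendor's REST API would go. The "/upload" path is illustrative.
public class UploadBridge {
    public static boolean roundTrip() throws Exception {
        AtomicReference<byte[]> received = new AtomicReference<>();

        // Stand-in for the "middle-man" service hosted on OpenShift.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/upload", exchange -> {
            received.set(exchange.getRequestBody().readAllBytes());
            exchange.sendResponseHeaders(200, -1);   // ack with no response body
            exchange.close();
        });
        server.start();
        int port = server.getAddress().getPort();

        // The sending team's side becomes a plain HTTP upload instead of SFTP.
        byte[] file = "internal-file-contents".getBytes(StandardCharsets.UTF_8);
        HttpResponse<Void> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/upload"))
                        .POST(HttpRequest.BodyPublishers.ofByteArray(file))
                        .build(),
                HttpResponse.BodyHandlers.discarding());

        server.stop(0);
        return resp.statusCode() == 200
                && new String(received.get(), StandardCharsets.UTF_8).equals("internal-file-contents");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip());
    }
}
```

The catch is on the producing side: if the team that sends the files can only speak SFTP, something still has to do the SFTP-to-HTTP translation, so this only helps if they can switch to an HTTP client (curl, or a small wrapper like the one above).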

DEADLINE_EXCEEDED when publishing to a Cloud Pub/Sub topic from Compute Engine

I have a Java application running in a Google Compute Engine instance. I am attempting to publish a message to a Cloud Pub/Sub topic using the google-cloud library, and I am getting DEADLINE_EXCEEDED exceptions. The code looks like this:
PubSub pubSub = PubSubOptions.getDefaultInstance().toBuilder()
.build().getService();
String messageId = pubSub.publish(topic, message);
The result is:
com.google.cloud.pubsub.PubSubException: io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED
The documentation suggests that this response is typically caused by networking issues. Is there something I need to configure in my Networking section to allow Compute Engine to reach Pub/Sub? The default-allow-internal firewall rule is present.
I have already made my Compute Engine service account an editor and publisher in the Pub/Sub topic's permissions.
The application resides in a Docker container within a Container Engine-managed Compute Engine instance. The Pub/Sub topic and the Compute Engine instance are in the same project. I am able to use the google-cloud library to connect to other Cloud Platform services, such as Datastore. I am also able to publish to the same Pub/Sub topic without fail from App Engine instances in the same project.
Would I have more luck using the google-api-services-pubsub API library instead of google-cloud?
I have the same problem at the moment and created an issue on the google-cloud-java issue tracker on GitHub, since I couldn't find an existing one.
We switched from the old google-api-services-pubsub library (which worked) to the new one and got the exception. Our Java application is also running on a Compute Engine instance.
While this can be caused by networking issues (the client cannot connect to the service), the more typical cause is publishing too fast. It is common to call the publish method in a tight loop, which can create thousands to hundreds of thousands of publish requests in the time it takes a typical request to return. The network stack on a machine will only send so many requests at a time, while the others sit waiting to be sent. If your machine is able to send N parallel requests and each request takes 0.1s, then in a minute you can send 600N requests. If you publish at a faster rate than that, the additional requests will time out on the client with DEADLINE_EXCEEDED.
You can confirm this by looking at server-side metrics in Cloud Monitoring: you will not see the timed-out requests there, only the successful ones. The rate of those successful requests tells you the throughput capacity of your machines.
The solution is publisher flow control: effectively, limiting how fast you call the publish method. You can do this in most client libraries through simple configuration; refer to the publisher API documentation for your client library for details. In Java, for example, this is a property called FlowControlSettings on the Publisher's BatchingSettings. In Python, it is set directly in the PublisherOptions.
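The mechanism behind flow control can be sketched in plain Java with a Semaphore that caps the number of in-flight publishes. This is only an illustration of the idea, not the actual Pub/Sub client API (which handles this for you via FlowControlSettings); all names are made up.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of publisher-side flow control: a Semaphore caps the number of
// outstanding publish calls, so a tight publishing loop blocks instead of
// queuing more requests than the network can move (which is what surfaces
// as DEADLINE_EXCEEDED on the client).
public class FlowControlledPublisher {
    private final Semaphore outstanding;
    private final ExecutorService executor = Executors.newFixedThreadPool(8);
    private final AtomicInteger published = new AtomicInteger();

    public FlowControlledPublisher(int maxOutstanding) {
        this.outstanding = new Semaphore(maxOutstanding);
    }

    // Blocks when maxOutstanding publishes are already in flight.
    public Future<?> publish(String message) throws InterruptedException {
        outstanding.acquire();
        return executor.submit(() -> {
            try {
                simulateRpc(message);          // stand-in for the real publish RPC
                published.incrementAndGet();
            } finally {
                outstanding.release();         // free a slot for the next message
            }
        });
    }

    private void simulateRpc(String message) {
        try { Thread.sleep(1); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public int publishedCount() { return published.get(); }

    public void shutdown() throws InterruptedException {
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

With the real client you would not write this yourself; you would configure FlowControlSettings on the Publisher's BatchingSettings and let the library block or reject when the limit is reached.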

JMS Broker Options for CloudBees?

Does CloudBees offer any JMS hosting or any 3rd-party JMS hosting (like IronMQ)? After perusing their Partner Ecosystem pages and developer docs, I don't see anything of the sort. I'd like to deploy a web app to CloudBees but will need messaging, ideally something like RabbitMQ or ActiveMQ.
I know I could always ship my WAR with an embedded instance of ActiveMQ running inside of it, but that kind of defeats the purpose of scalability in my mind: the harder the queues are working, the slower my app is going to become, and it would be nice to delegate the messaging work to a broker residing on another machine.
Also, it would be most sweet if such a messaging service had a free tier like so many of the other CloudBees tech partners offer...
Any ideas? Thanks in advance!
There is no official partner (yet) providing a messaging service, but you can certainly use a SaaS MQ service from your CloudBees application even without an official partnership: for example http://www.cloudamqp.com/ for RabbitMQ-as-a-service, Iron.io, or Amazon SQS. The only consideration is to ensure the service is hosted on Amazon, so your application won't suffer network latency reaching the MQ broker (most SaaS offerings run on AWS anyway).
A free tier is another consideration; that depends on the partner's business model (CloudAMQP has one).

communication between jruby app and java app that are on different servers

Does anyone have experience running a JRuby project on JBoss (using TorqueBox or similar) that needs to communicate with other Java apps that are not on the same JBoss as the JRuby app, i.e. a Java project on another JBoss?
I know there is torquebox-messaging, but I don't know whether it can communicate with an external app (outside the JRuby app's JBoss).
Best practices are welcome.
Thanks in advance.
P.S. Placing the other app on the same JBoss as the JRuby app is not an acceptable solution.
I can recommend using Thrift and building the communication on top of it.
Thrift has generators for both of the languages you need (Java and JRuby) and provides good, fast communication.
UPDATED:
Thrift is an RPC (remote procedure call) framework developed at Facebook. You can read about it in detail on the wiki.
In a few words, to save you time, here is what it is and how to use it:
You describe your data structures and service interface in a .thrift file (or files), and generate from that file all the needed source files (with all the necessary serialization) for one or more languages. Then you can create a server and a client in a few lines.
On the client side, using the service looks like calling methods on an ordinary class.
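To make the workflow concrete, a minimal .thrift file for a hypothetical message service could look like this (all names here are made up for illustration):

```thrift
// message.thrift - hypothetical service definition
struct Message {
  1: i64    id,
  2: string body
}

service MessageService {
  Message fetch(1: i64 id),
  void    push(1: Message msg)
}
```

Running the Thrift compiler on it (e.g. `thrift --gen java message.thrift`, and likewise for Ruby) produces the serialization code and client/server stubs for each language.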
With Thrift you can choose which protocol and transport are used.
In most cases the Binary or Compact protocol is used over a blocking or non-blocking transport, so network communication is light and fast, with fast serialization on top.
SOAP (XML over HTTP) packets are several times bigger, and SOAP is a poor fit for sending binary data, but that's not all: XML serialization is also very slow, so with SOAP you pay a big overhead. In addition, with SOAP you need to write (or use a third-party) library for calling the server (a thin network layer); Thrift already provides that for you.
SMTP, and basically JMS too, are inappropriate for real-time, question-answer communication.
That is, if you just need to put a message in a queue so that someone, at some point, picks it up and processes it, you can (and should) use JMS or another MQ service (Thrift can do this too, but an MQ architecture is better suited to that problem).
But if you need real-time query-answer calls, you should use RPC; the protocol can be HTTP-based (REST, SOAP), binary (Thrift, ProtoBuf, JDBC, etc.), or anything else.
Thrift (and ProtoBuf) provide a framework for generating the client and server, so they insulate you from the low-level issues.
P.S.:
I made an example in the past, https://github.com/imysak/using-thrift (communication via Thrift with a Java server plus a Java client or a node.js client); maybe it will be useful for someone. But you can find simpler and better examples elsewhere.
TorqueBox supports JMS. The gem you mentioned, torquebox-messaging, allows publishing and processing HornetQ messages on the local JBoss AS server/cluster that the JRuby app runs in. I don't think it currently supports connecting to remote servers.
Using this functionality in your JRuby app, you could then configure your Java app on the other server to communicate with the HornetQ instance running in the JBoss AS that hosts the JRuby app.
Alternatively you could always implement your own communication protocol or use another Java library - you have access to anything Java you want to run from JRuby.
You can use Web Services or JMS for that

Java server scaling solution

Hello, Stack Overflow! We are developing a system that should be horizontally scalable, so a messaging system seems to be the right approach, but it is very low-level. Our main requirement is persistent connections between clients and the server system (the clients are mobile applications communicating with the server via an XML-based protocol). The next very important task is work distribution based on each node's current load. We are currently using a legacy application based on the Apache MINA framework, but it is not scalable. So, what architecture would be sufficient, and what libraries or frameworks do you know of that solve these problems?
Work distribution should be based on task length, which can vary.
Every application server in the cluster should be able to push a message to a client at any time, without a request from the client.
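As a sketch of the load-based distribution requirement: one common approach is to route each task, weighted by its estimated length, to the node with the least outstanding work. The class below is a self-contained illustration of that idea, not a recommendation of any particular framework; all names are hypothetical.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Sketch of load-based work distribution: each task carries an estimated
// length, and is routed to the node with the least outstanding work.
// In a real cluster the load figures would come from node heartbeats.
public class LeastLoadedDispatcher {
    static final class Node {
        final String name;
        long outstandingWork;                  // sum of estimated task lengths
        Node(String name) { this.name = name; }
    }

    private final PriorityQueue<Node> nodes =
            new PriorityQueue<>(Comparator.comparingLong(n -> n.outstandingWork));

    public LeastLoadedDispatcher(String... nodeNames) {
        for (String n : nodeNames) nodes.add(new Node(n));
    }

    // Assigns the task to the currently least-loaded node and returns its name.
    public synchronized String dispatch(long estimatedLength) {
        Node least = nodes.poll();             // node with minimal outstanding work
        least.outstandingWork += estimatedLength;
        nodes.add(least);                      // re-insert with its updated load
        return least.name;
    }

    // Called when a node reports a finished task, releasing its load.
    public synchronized void complete(String nodeName, long estimatedLength) {
        for (Node n : nodes) {
            if (n.name.equals(nodeName)) {
                nodes.remove(n);               // remove before mutating the sort key
                n.outstandingWork -= estimatedLength;
                nodes.add(n);
                return;
            }
        }
    }
}
```

Grid/data-grid products like the Hazelcast or GridGain mentioned below ship distributed executors that do this kind of balancing for you, which is usually preferable to hand-rolling it.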
What about Hazelcast or GridGain?
