I know the theoretical difference, for example:
Amazon MQ provides a managed message broker service that takes care of operating ActiveMQ, including broker set up, monitoring, maintenance, and provisioning the underlying infrastructure for high availability and durability. You may want to consider Amazon MQ when you want to offload operational overhead and associated costs.
What I am asking is: is there any difference in making connections and fetching data between ActiveMQ and Amazon MQ? (On the coding side, mainly in Java.)
I think there is no difference.
No, there is no difference on the code side. Both of them use the same underlying framework and libraries.
The only difference you might see on the code side is that Amazon MQ is SSL secured, which makes it mandatory to pass a username and password in the code.
Amazon MQ works with ActiveMQ-related dependencies at version 5.15.8 or greater.
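To make the SSL/credentials point concrete, here is a minimal sketch of connecting to an Amazon MQ broker with the plain ActiveMQ JMS client; the broker endpoint, queue name and credentials below are placeholders, not real values.

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQSslConnectionFactory;

public class AmazonMqSend {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint - use the SSL endpoint shown for your broker in the AWS console.
        ActiveMQSslConnectionFactory factory =
                new ActiveMQSslConnectionFactory("ssl://b-1234abcd.mq.us-east-1.amazonaws.com:61617");

        // Amazon MQ only accepts authenticated connections; a default local ActiveMQ often does not require this.
        Connection connection = factory.createConnection("myUser", "myPassword");
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("MyQueue"));
        producer.send(session.createTextMessage("hello from Amazon MQ"));

        connection.close();
    }
}

The same code works against a self-managed ActiveMQ broker if you swap the URI for tcp://host:61616 and drop the credentials, which is why the answer above says there is essentially no difference on the code side.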
The other differences are, obviously, in the kind of services AWS provides around the broker, such as better monitoring, log analysis, and configuration/version management of the broker config.
Consider this scenario: I have N>2 software components (microservices) that can communicate through two different communication protocols depending on how they are deployed. In other words, I have two deployment scenarios:
The components are to be deployed on the same machine. In this case I am not sure it makes sense to use HTTP between the components if I think about performance. I understand that there are more efficient ways to communicate between two processes on the same machine in Java, such as sockets, RMI, RPC ...
The components are to be deployed on N different machines. In this case it seems to make sense to use HTTP to communicate between these components.
In short, what I want to do is to be able to configure the communication protocol depending on the way I perform the deployment: On a single machine, for example, use RMI, but when I deploy on two machines, use HTTP.
Does anyone know how I can do this using Spring Boot?
Many Thanks!
The fundamental building block of protocols like RMI or HTTP is socket communication. If you are not looking for the comfort of HTTP or RMI, and performance is the priority, pure socket communication is your choice; see the sketch below.
This raises other concerns, such as deployment difficulties: you need to know the IP addresses of both nodes in advance.
Another option is to go for Unix domain sockets for communication within the same server. For that you can depend on junixsocket.
If you want to go another route, check all the inter-process communication options.
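For reference, the "pure socket" option in its simplest form is just java.net sockets between the two processes; the port below is a placeholder and, as noted above, must be known by both sides in advance.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class RawSocketDemo {
    public static void main(String[] args) throws Exception {
        // Component A: bind a listener on a well-known port.
        ServerSocket server = new ServerSocket(9090);
        Thread listener = new Thread(() -> {
            try (Socket client = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
                System.out.println("received: " + in.readLine());
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        listener.start();

        // Component B: connect and send a message.
        try (Socket socket = new Socket("localhost", 9090);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            out.println("hello over a raw socket");
        }

        listener.join();
        server.close();
    }
}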
EDIT
As you said in a comment, "It is simply no longer a question of two components but of many". In that scenario each component should be a microservice, and each should be able to interact with the others. If that is the choice, the most scalable options are REST and RPC, both of which use HTTP under the hood. REST is the ideal approach for an API developed against a data source using CRUD operations, while RPC leans more towards action-oriented APIs. You can find more details on the difference between REST and RPC here.
How I understand this is...
if the components (producer and consumer) are deployed on the same host then use an optimized protocol and if on different hosts then use HTTP(s)
Firstly, there must be a serious driver to go down this route. I take it the driver here is performance: you would like to offer faster performance on a local deployment and comparatively compromised speeds on distributed deployments. BTW, given that we are in a distributed-deployment world (or at least where we are headed), HTTP is what will survive. Custom protocols are discouraged.
Anyway, I would say your producer application should be in a self-healing / discovery mode. On start-up (or periodically) it could check the health of the "optimized" endpoint and decide whether the optimized receiver is around. The receiver would need to stand behind a load balancer. If the receiver is not up, then fall back to HTTP(S) and set up this instance accordingly at runtime.
For the consumer, it would need to keep both the gates (HTTP and optimized) open. It should be ready to handle requests from either channel.
In Spring Boot you can have a health check implemented and switch the emitter on/off depending on the health of the optimized endpoint. If both endpoints are unhealthy then surely the producer cannot emit anything. Apart from this, the rest is just normal dependency injection.
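As a rough illustration of that dependency-injection part, here is a sketch using a hypothetical MessageSender port with two adapters, selected by a Spring Boot property per deployment; all names and the property key are made up for the example.

import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class TransportConfig {

    // Hypothetical port the rest of the application depends on.
    public interface MessageSender {
        void send(String payload);
    }

    // Wired in when the components run on different machines (transport.mode=http, also the default).
    @Bean
    @ConditionalOnProperty(name = "transport.mode", havingValue = "http", matchIfMissing = true)
    public MessageSender httpSender() {
        RestTemplate rest = new RestTemplate();
        return payload -> rest.postForObject("http://consumer-host:8080/messages", payload, Void.class);
    }

    // Wired in when everything is on one machine (transport.mode=local); the body could use RMI,
    // a Unix domain socket, or a plain in-process call instead of HTTP.
    @Bean
    @ConditionalOnProperty(name = "transport.mode", havingValue = "local")
    public MessageSender localSender() {
        return payload -> System.out.println("delivering in-process: " + payload);
    }
}

Switching deployments is then just a matter of setting transport.mode=local or transport.mode=http in each environment's application.properties.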
My project is looking to deploy a new j2ee application to Amazon's cloud. ElasticBeanstalk supports Tomcat apps, which seems perfect. Are there any particular design considerations to keep in mind when writing said app that might differ from just a standalone tomcat on a server?
For example, I understand that the server is meant to scale automatically. Is this like a cluster? Our application framework tends to like to stick state in the HttpSession, is that a problem? Or when it says it scales automatically, does that just mean memory and CPU?
Automatic scaling on AWS is done via adding more servers, not adding more CPU/RAM. You can add more CPU/RAM manually, but it requires shutting down the server for a minute to make the change, and then configuring any software running on the server to take advantage of the added RAM, so that's not the way automatic scaling is done.
Elastic Beanstalk is basically a management interface for Amazon EC2 servers, Elastic Load Balancers and Auto Scaling Groups. It sets all that up for you and provides a convenient way of deploying new versions of your application easily. Elastic Beanstalk will create EC2 servers behind an Elastic Load Balancer and use an Auto Scaling configuration to add more servers as your application load increases. It handles adding the servers to the load balancer when they are ready to receive traffic, and removing them from the load balancer and deleting the extra servers when they are no longer needed.
For your Java application running on Tomcat you have a few options to handle horizontal scaling well. You can enable sticky sessions on the Load Balancer so that all requests from a specific user will go to the same server, thus keeping the HttpSession tied to the user. The main problem with this is that if a server is removed from the pool you may lose some HttpSessions and cause any users that were "stuck" to that server to be logged out of your application. The solution to this is to configure your Tomcat instances to store sessions in a shared location. There are Tomcat session store implementations out there that work with AWS services like ElastiCache (Redis) and DynamoDB. I would recommend using one of those, probably the Redis implementation if you aren't already familiar with DynamoDB.
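One practical consequence, regardless of which shared session store you pick: anything you put into the HttpSession has to be serializable so it can be written to Redis or DynamoDB. A trivial sketch (class and fields are made up):

import java.io.Serializable;

// Session attributes must be serializable once sessions are externalized
// to a shared store such as ElastiCache (Redis) or DynamoDB.
public class UserProfile implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String userId;
    private final String displayName;

    public UserProfile(String userId, String displayName) {
        this.userId = userId;
        this.displayName = displayName;
    }

    public String getUserId() { return userId; }
    public String getDisplayName() { return displayName; }
}

// Somewhere in a servlet or controller:
//     request.getSession().setAttribute("profile", new UserProfile("42", "Jane"));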
Another consideration for moving a Java application to AWS is that you cannot use any tools or libraries that rely on multi-cast. You may not be using multi-cast for anything, but in my experience every Java app I've had to migrate to AWS relied on multi-cast for clustering and I had to modify it to use a different clustering method.
Also, for a successful migration to AWS I suggest you read up a bit on VPCs, private IP versus public IP, and Security Groups. A solid understanding of those topics is key to setting up your network so that your web servers can communicate with your DB and cache servers in a secure and performant manner.
I'm looking for a message queue as a service that ..
.. is hosted in AWS us-east
.. offers real PubSub (not polling!)
.. can be used in production
.. offers high availability
.. has a good client for Java
I only found CloudAMQP (still in beta), AppEngine Task Queue (not AWS), SQS (polling only), Redis To Go (no high availability? -twitter stream seems full of issues) and IronMQ (polling only).
What am I missing?
You should check out one of the open PaaS offerings available (such as Cloudify, OpenShift or Cloud Foundry). Using such a PaaS you can easily onboard most services, including popular message queues like ActiveMQ, RabbitMQ or SonicMQ.
Cloudify (of which I am one of the contributors) is open source and free, and can onboard almost any message queue you want on any cloud.
You can easily onboard ActiveMQ, RabbitMQ, SonicMQ or another service you are used to working with off the cloud.
Looks like Iron.io have added pub/sub. Maybe it fits your needs now? Also, it appears to talk beanstalkd, so you're potentially free to migrate easily to a self hosted solution at some point in the future (should you feel that urge!).
Have you tried pure messaging solutions? http://www.pubnub.com/faq or http://pusher.com ? According to their websites they have presence on EC2.
What is a good solution for communication via message broker that supports both (C)Python and Java/JMS applications? My particular requirements are:
open source solution
Available on Linux-based systems
No rendezvous between sender and receiver required (i.e. uses a message broker)
Multiple producers and consumers supported for a single event queue (only one consumer receives each message)
Unit of work support with two-phase commit (XA support nice to have)
Support for persistent messages (i.e. that survive a restart of the broker)
Supports JMS for Java clients
No component is "fringe", meaning at risk of falling out of maintenance due to lack of community support/interest
If there is a Python client that manages to "speak JMS" that would be awesome, but an answer including a task to write my own Python JMS layer is acceptable
I have had a surprisingly hard time finding a solution for this. Apache's ActiveMQ has no Python support out of the box. ZeroMQ requires a rendezvous. RabbitMQ does not appear to support JMS. The best candidate I have found is a combination of ActiveMQ and the pyactivemq library. But the first and last release of pyactivemq was in 2008, so it would appear that that fails my "no fringe" requirement.
The ideal answer will be the names of one or more well-supported and well-documented open source packages, that you have personally used to communicate between a Java/JMS and Python application, and that do not require a lot of integration work to get started. An answer that includes an "easy" (up to a few days of work) implementation of additional glue code to meet all the requirements above, would be acceptable. A commercial solution, in the absence of a good open source candidate, would be acceptable also.
Also, Jython is out. (If only I could...) The same Python applications will need to use modules only available in CPython.
JMS is a specification, not an implementation. RabbitMQ is a real option.
I have also happily used HornetQ (http://www.jboss.org/hornetq) from JBoss; as with everything JBoss, it is more aligned with the Java EE world, but RabbitMQ would be my choice, especially if you are using Spring as well.
I have had a surprisingly hard time finding a solution for this.
Apache's ActiveMQ has no Python support out of the box.
ActiveMQ brokers fully support using the Stomp protocol out of the box. Stomp is a text based protocol for messaging that has clients for many platforms and languages.
ActiveMQ's documentation should contain information on how to set up a connector for stomp. In its simplest form, enabling a connector would look something like:
<transportConnectors>
    <transportConnector name="stomp" uri="stomp://localhost:61613"/>
</transportConnectors>
Once enabled on the broker side, you can then use any python library that supports stomp. You can then use Stomp on the python side and JMS on the java side for communication with the broker and sending/receiving from specific destinations.
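For the Java/JMS half of that setup, a minimal consumer sketch against a local ActiveMQ broker might look like the following (broker URL and queue name are placeholders); a Python Stomp client sending to /queue/events on port 61613 would feed it.

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsSideConsumer {
    public static void main(String[] args) throws Exception {
        // The Java side talks OpenWire/JMS on 61616; the Python side talks Stomp on 61613.
        // Both address the same destination on the broker.
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("events"));

        // Block until the Python producer sends something via Stomp to /queue/events.
        TextMessage message = (TextMessage) consumer.receive();
        System.out.println("received: " + message.getText());

        connection.close();
    }
}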
You might want to take a look at OpenAMQ and another look at RabbitMQ.
The underlying messaging technology used by RabbitMQ and OpenAMQ is AMQP. You should be able to easily find Python and Java clients that work against both of these brokers (and ostensibly any other spec-compliant broker).
If JMS is a must-have, then you might be able to find a JMS client out there implemented on top of AMQP (OpenAMQ provided such a client at one time, but I am unsure of its current status).
We had been using GlassFish Message Queue (formerly Sun Java MQ), which is derived from Open MQ.
It satisfies most of your requirements, if not all.
We had been running failover-clustered brokers on Red Hat Linux (RHEL); it is reliable for heavy usage, though some 'quirks' lurk here and there.
I have used JMS in the past to build applications and it works great. Now I work with architects who would love to use the spec: SOAP over Java Message Service 1.0.
This spec seems overly complicated.
I do not see many implementations (besides the vendors pushing for the spec).
Is anyone here using this specification in a production environment?
What is your main benefit of using this spec?
Link: http://www.w3.org/TR/2009/CR-soapjms-20090604/
I had the bad luck of using SOAP over JMS. It does make some sense if it is used for fire-and-forget operations (no response message defined in the WSDL). In this case you can use the WSDL to generate client skeletons and you can store the WSDL in your service registry. Plus you get all the usual benefits of JMS (decoupling sender and receiver, load balancing, prioritisation, security, bridging to multiple destinations - e.g. non-intrusive auditing).
On the other hand SOAP is mainly used for request/reply type operations. Implementing request/reply pattern over JMS introduces the following problems:
Impossible to handle timeouts properly. You never know if a request is still waiting for delivery or got stuck in the called component.
Responses are typically sent on temporary queues. If the client disconnects before receiving the response and there is no explicit time to live set on the response message, the temp queue can get stuck in the JMS server until you restart it.
Having a JMS server in the middle dramatically increases round-trip times and adds unnecessary complexity.
JMS provides a reliable transport medium that decouples the sender from the receiver, but in case of request/reply the client should not be decoupled from the server. The client needs to know if the server is up and available.
The only advantage I can think of is that the server can be moved or load-balanced without the client knowing about it, but using UDDI and an HTTP load balancer is a better solution.
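To make the timeout and temporary-queue points concrete, here is a rough sketch of request/reply over plain JMS with the ActiveMQ client (queue name, URL and timeouts are arbitrary placeholders); note how the caller has to guess a receive timeout and set a time-to-live so requests do not linger in the broker.

import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsRequestReply {
    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Replies come back on a temporary queue owned by this connection.
        Destination replyQueue = session.createTemporaryQueue();

        MessageProducer producer = session.createProducer(session.createQueue("service.requests"));
        producer.setTimeToLive(30_000); // avoid requests piling up in the broker if the service is down

        TextMessage request = session.createTextMessage("<soap:Envelope>...</soap:Envelope>");
        request.setJMSReplyTo(replyQueue);
        producer.send(request);

        // Arbitrary timeout: if it expires, we cannot tell whether the request was
        // never delivered, is still queued, or is being processed right now.
        MessageConsumer replyConsumer = session.createConsumer(replyQueue);
        Message reply = replyConsumer.receive(10_000);
        if (reply == null) {
            System.out.println("timed out - request state unknown");
        }
        connection.close();
    }
}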
I'd say that, from an architect's perspective, this is the same kind of question as asking why we have a 5-layer Internet model, with the 5th being the application layer, when one could simply code the entire application at the socket level. Abstracting the transport layer (JMS in your case) away from what your application produces or consumes (SOAP messages) is good practice for many reasons, amongst which independent unit testing and future migration to other platforms are the first that come to my mind.
Goddammit, I hate working with architect astronauts. I feel your pain, brother. Do they actually have an actual, functional reason for doing so other than "it's a standard"? Is this decision going to lock them into a specific EE container vendor (say WebSphere)? That is so 2002; very few people have a real need for it, and in fact SOAP has been pretty much ignored by most practical, successful implementations. Unless they have a real need for more reliability than what is provided by JMS or SOAP-over-HTTP alone, you are in for a trip.
Check out the Apache CXF site for some examples (specific to CXF).
http://cxf.apache.org/docs/soap-over-jms-10-support.html
The rule of thumb would be to really use the bare minimums, and not the full stack. If your architect astronauts still insist in using the whole thing, you might just be walking into a world of pain. Sorry.
EDIT:
BTW, what application container will you be using? WebLogic, JBoss, WebSphere? And which web service framework? Apache CXF, Axis?
Architect astronauts will love to say that those are implementation details. Bull. Any decision on a system whose change carries a great cost (or whose implementation carries significant savings) is an architectural decision. These pretty much dictate how things will be implemented (and what the cost of change will be), so determining early on which you will be using is an architectural decision, except with very self-contained systems.
A few more links on this controversial subject:
http://www.subbu.org/blog/2005/03/soap-over-jms
http://parand.com/say/index.php/2005/03/29/soap-over-jms-no-such-thing/
SOAP/JMS and SOAP/HTTP are used for different scenarios, namely fire-and-forget messaging and request/response.
SOAP/JMS is actually terrific for propagating discovered (and, if required, converted) messages to multiple sources simply by using SoapAction and targetService. The JMS specs also help with complex routing using the headers.
In fact, UDDI as well as build servers can be, and have been, used as sources to discover published WSDLs (inline) from massive middleware deployments (irrespective of engine architecture), delivered as SOAP/JMS messages to a single SOA repository sink. This is very important in enterprise governance.
Hence it is of great value for wire-tap patterns, especially when asynchronicity is of paramount importance.
SOAP/HTTP, and now REST (with the verb-noun model), work best for trusted sub-system calls.
Imagine you implemented a frequently used web service that tends to run out of threads, while you promised that no message will be lost.
A web service implementation (the server) that runs over a session bean comes with a limited number of threads (say n active processing elements in your pool), so it may run n web-service requests concurrently. What will happen to the (n+1)th request? Didn't you promise your application owner that no message will be lost? This is what JMS guarantees: the web service skeleton only has to store the data in a queue, and this gives reliability also with regard to load peaks.
The interesting thing about WS over JMS is that the elapsed time of a running WS request is quite short, so the computing resource is immediately free again to serve the next request.
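A rough sketch of that "skeleton only has to store the data in a queue" idea, assuming a JAX-WS style endpoint and container-managed JMS resources; all names and JNDI locations are illustrative, not prescribed by the spec.

import javax.annotation.Resource;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jws.Oneway;
import javax.jws.WebMethod;
import javax.jws.WebService;

@WebService
public class OrderIntakeService {

    @Resource(lookup = "jms/ConnectionFactory")
    private ConnectionFactory connectionFactory;

    @Resource(lookup = "jms/OrderQueue")
    private Queue orderQueue;

    // Fire-and-forget: the request thread only persists the payload and returns,
    // so it is free again almost immediately, even under a load peak.
    @WebMethod
    @Oneway
    public void submitOrder(String orderXml) {
        try {
            Connection connection = connectionFactory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                session.createProducer(orderQueue)
                       .send(session.createTextMessage(orderXml));
            } finally {
                connection.close();
            }
        } catch (JMSException e) {
            throw new RuntimeException("could not enqueue order", e);
        }
    }
}

A message-driven bean or a standalone consumer can then drain the queue at whatever pace the n worker threads allow, which is exactly the buffering behaviour described above.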
From here:
SOAP over JMS offers an alternative messaging mechanism to SOAP over HTTP. While it is not yet standardized and hence may not be interoperable across platforms, SOAP over JMS offers more reliable and scalable messaging support than SOAP over HTTP. As JAX-RPC and JSR-109 become integral parts of the J2EE standard, enterprise messaging in Web services using SOAP over JMS will become well-established.