Queue migration from one EMX server to another - Java

We need advice on this queue migration topic.
There are two JMS providers (PROVIDER and NEWPROVIDER) connecting to two instances of EMX, each with some queues. We now want to move all queues from PROVIDER to NEWPROVIDER on the EMX side. In the WebSphere admin console, the minimum-effort solution is to change the URL and authentication of PROVIDER to match NEWPROVIDER.
I don't see any immediate issue, as it only seems to affect the connection pool, maximum connections, etc., but I'm not sure about taking the change to production.
Question: is there any issue with this approach, or should we delete and recreate all queues under NEWPROVIDER?
To make the change easier to understand I have created a diagram: blue shows the current state, red shows the changes and the to-be state.
We are using WebSphere 6.1 / JMS (EMX) / Oracle.

Pointing WebSphere to the new EMS instance, as you've illustrated, is pretty straightforward. The main question is: do all the relevant destinations (queues, topics, durable subscribers if any) exist on the new EMS instance? In other words, has the exact configuration been replicated from the existing EMS instance to the new one? Will all WebSphere services have access to the data they need to operate with the new EMS instance? Will in-flight (undelivered/unacknowledged) messages that may live in queues on the old instance need to be available on the new instance?
If you're simply moving the store-files and conf files over, then all of this would essentially be taken care of.

As Larry mentioned, it is quite straightforward. In-flight messages were not an issue here, because they are handled by a different team and are not part of this question.
We successfully completed the task by changing the URL and authentication details. We did not delete the queues and we did not recreate them in NEWPROVIDER; we simply pointed PROVIDER at the same instance as NEWPROVIDER.

Related

QuickFIX/J - failover strategy

I would like to ask about a couple of failover strategies for QuickFIX/J and the Spring Boot QuickFIX starter.
For example, suppose I have a FIX engine server receiving a lot of FIX messages all day, and the service suddenly becomes unavailable.
What happens when the service comes back up? Where will it resume reading new FIX messages?
What happens when the service comes under heavy load and Kubernetes spins up a second instance? Is there any way to keep data consistent between the two microservices so that they do not process the same message twice?
How do I deal with multiple sessions on multiple microservices while scaling at the same time?
Thanks for any response; I'm just getting started with this library.
The FIX engine will synchronise messages based on the sequence number of the last message it received. You can read about the basics here: FIX message recovery
Since you are new to the FIX protocol, that whole page is a good starting point for getting acquainted with it. Of course the FIX engine does the session-level work on its own, but it's always good to know the basics.
I don't have in-depth knowledge of Kubernetes, but the important point is that a FIX session is a point-to-point connection. That means that for any given session (identified by a SessionID, usually composed of BeginString (e.g. FIX.4.4), SenderCompID and TargetCompID) there is exactly one Initiator (i.e. client) and one Acceptor (i.e. server).
So spinning up a second instance of a service that connects to the same FIX session should be avoided. It could work if you distributed several sessions over several instances.
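To illustrate "several sessions distributed over several instances": each instance's QuickFIX/J settings file would define only the session(s) that instance owns, so no two instances ever share a SessionID. The host and CompIDs below are made-up examples:

```ini
# Settings file for instance A. Instance B would use a different
# SenderCompID (e.g. CLIENT_B), so the two instances never contend
# for the same FIX session.
[default]
ConnectionType=initiator
BeginString=FIX.4.4
SocketConnectHost=fix.example.com
SocketConnectPort=9876
HeartBtInt=30
StartTime=00:00:00
EndTime=00:00:00

[session]
SenderCompID=CLIENT_A
TargetCompID=BROKER
```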
Don't really know what you mean by this, sorry.

Is there a way to queue REST calls with Spring?

I am working on a project that makes a REST call to another service to save data in the DB. The data is very important, so we can't afford to lose anything.
If there is a problem in the network this message will be lost, which can't happen. I have already looked at Spring Retry, but it is designed to handle temporary network glitches, which is not what I need.
I need a way to put the REST calls in some kind of queue (like ActiveMQ) and preserve their order (this is very important because I receive Save, Delete and Update REST calls).
Any ideas? Thanks.
If a standalone installation of ActiveMQ is not desirable, you can use an embedded in-memory broker.
http://activemq.apache.org/how-do-i-embed-a-broker-inside-a-connection.html
I'm not sure whether the in-memory broker can be configured to use KahaDB. Of course the scope of persistence will be limited to your application process, i.e. messages in the queue will be lost if the application is restarted. This is probably the most important argument against in-memory or hand-rolled approaches.
If you must reinvent the wheel, have a look at this topic, which discusses using Executors and a BlockingQueue to implement your own pseudo-MQ: Producer/Consumer threads using a Queue.
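A minimal sketch of that pseudo-MQ idea using only the JDK (the class and method names here are made up for illustration; the consumer body is where the real REST call would go). A single consumer thread draining a BlockingQueue preserves the Save/Update/Delete ordering:

```java
import java.util.*;
import java.util.concurrent.*;

public class OrderedQueueSketch {

    // Enqueue the given calls and drain them with one consumer thread,
    // returning the order in which they were processed.
    static List<String> process(List<String> calls) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(calls);
        List<String> processed = Collections.synchronizedList(new ArrayList<>());
        CountDownLatch done = new CountDownLatch(calls.size());

        Thread consumer = new Thread(() -> {
            try {
                while (done.getCount() > 0) {
                    String call = queue.take();   // blocks until work arrives, FIFO
                    processed.add(call);          // stand-in for the real REST call
                    done.countDown();
                }
            } catch (InterruptedException ignored) { }
        });
        consumer.start();
        done.await();
        consumer.interrupt();
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        // Calls come out in exactly the order they were put in.
        System.out.println(process(List.of("SAVE", "UPDATE", "DELETE")));
    }
}
```

As the answer above notes, this keeps everything in process memory, so a restart loses the queue; it only demonstrates the ordering mechanics.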
On a side note, the retry mechanism is not something provided by the MQ broker; it is implemented by the client, be it ActiveMQ's bundled client library or another messaging library such as Camel.
You can also review your current tech stack to see whether any existing component has JMS capabilities. For example, the Oracle database bundles an MQ called Oracle AQ.
Have your service keep its own internal queue of jobs, and only move on to the next REST call once the previous one returns a success code.
There are many better ways to do this but the limitation will come down to what your company will let you do.
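The "advance only on success" idea above can be sketched like this. Everything here is illustrative: each Supplier<Boolean> stands in for one REST call reporting success or failure, and maxAttempts is a made-up guard against retrying forever:

```java
import java.util.*;
import java.util.function.Supplier;

public class OrderedRetry {

    // Process calls strictly in order; retry the current call until it
    // succeeds before touching the next one. Returns total attempts made.
    static int processInOrder(List<Supplier<Boolean>> calls, int maxAttempts) {
        int attempts = 0;
        for (Supplier<Boolean> call : calls) {
            boolean ok = false;
            while (!ok && attempts < maxAttempts) {
                attempts++;
                ok = call.get();   // stand-in for the REST call
            }
            if (!ok) throw new IllegalStateException("gave up; ordering would break");
        }
        return attempts;
    }

    public static void main(String[] args) {
        Supplier<Boolean> immediate = () -> true;
        // A call that fails twice, then succeeds on the third attempt.
        int[] failures = {2};
        Supplier<Boolean> flaky = () -> failures[0]-- <= 0;

        int attempts = processInOrder(List.of(immediate, flaky), 10);
        System.out.println("total attempts: " + attempts); // 1 + 3 = 4
    }
}
```

The trade-off is that a permanently failing call blocks the whole queue, which is exactly the price of strict ordering.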

WebSphere propagating changes to customized cache across all servers in a cluster

We are using WAS 7.0 with a customized local cache in a clustered environment. In our application there is some very commonly used (and very seldom updated) reference information that is retrieved from the database and stored in a customized cache on server start-up (we do not use the application server's cache). New reference values can be entered through the application; when this is done, the data in the database is updated and the cached data on that single server (1 of 3) is refreshed. If a user hits any of the other servers in the cluster, they will not see the newly entered reference values (unless the server is bounced). Restarting the cluster is not a feasible solution, as it would bring down production. My question is: how do I tell the other servers to update their local caches as well?
I looked at the JMS publish/subscribe mechanism. I was thinking of publishing a message every time I update the values of any reference table; the other servers would act as subscribers, receive the message, and refresh their caches. Each server needs to act as both a publisher and a subscriber. I am not sure how to incorporate this solution into my application, and I am open to suggestions and other solutions as well.
In general you should consider the dynamic cache service provided by the application server; it already has replication options out of the box. Check "Using the DistributedMap and DistributedObjectCache interfaces for the dynamic cache" for more details.
Regarding your custom JMS solution: you would have an MDB in your application configured against a topic. Whenever the cache changes, your application publishes the change to that topic; the MDB on each server reads the message and updates that server's local cache.
But since that is quite a complex change, I'd strongly consider switching to the built-in cache.

How to do a distributed transaction in Java involving JDBC, JMS and web services

I was asked the following question in an interview and couldn't answer it.
How do you include a JDBC operation, a web service call and a JMS operation in one single transaction, so that if any one of them fails, everything is rolled back?
I have heard of the two-phase commit protocol and Oracle XA for database transactions involving multiple databases, but I'm not sure whether the same can be used here.
The critical factor is that the web services you connect to have been built with a web services framework that supports transactions. JBoss Narayana is one such framework. Once the web services endpoint you are connecting to is on such a framework, it's just a matter of configuring Spring to use the appropriate client.
In the case of Narayana, the spring config (from http://bgshinhung.blogspot.ca/2012/10/integrating-spring-framework-jetty-and.html) for transactions with web services:
You are never going to be able to do this in a completely bomb-proof way, because the systems are separate. A failure at one stage (for example, the power to your server being cut between the SQL commit and the JMS commit) will leave the SQL commit in place.
The only way to resolve that would be to keep some record of partial commits somewhere and scan it on startup to fix any resulting problems; but then, what happens if you have a failure while processing or maintaining that list?
Essentially the solution is to do your own implementation of the multiple-stage-commit and rollback process wrapping the three operations you need to make. If any of the operations fails then you need to reverse (preferably using an internal transaction mechanism, if not then by issuing reversing commands) any that have been done so far.
There are a lot of corner cases and potential ways for a system like this to fail though, so really the first approach should be to consider whether you can redesign the system so you don't need to do this at all!
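The manual multi-stage commit with reversing commands described above can be sketched as follows. This is a hedged illustration only: all names are made up, and real compensations would issue the actual reversing SQL/JMS/web-service operations (it still suffers from the corner cases the answer warns about, e.g. a crash mid-rollback):

```java
import java.util.*;

public class CompensatingTx {

    // One unit of work plus the action that undoes it.
    static final class Step {
        final String name; final Runnable apply; final Runnable compensate;
        Step(String name, Runnable apply, Runnable compensate) {
            this.name = name; this.apply = apply; this.compensate = compensate;
        }
    }

    // Run steps in order; if one throws, undo the completed ones in
    // reverse order. Returns an action log for inspection.
    static List<String> run(List<Step> steps) {
        List<String> log = new ArrayList<>();
        Deque<Step> completed = new ArrayDeque<>();
        try {
            for (Step s : steps) {
                s.apply.run();
                log.add("apply " + s.name);
                completed.push(s);
            }
        } catch (RuntimeException e) {
            while (!completed.isEmpty()) {      // newest first
                Step s = completed.pop();
                s.compensate.run();
                log.add("undo " + s.name);
            }
        }
        return log;
    }

    public static void main(String[] args) {
        Runnable noop = () -> {};
        List<String> log = run(List.of(
            new Step("jdbc insert", noop, noop),
            new Step("jms publish", noop, noop),
            new Step("web service", () -> { throw new RuntimeException("ws failed"); }, noop)));
        // prints [apply jdbc insert, apply jms publish, undo jms publish, undo jdbc insert]
        System.out.println(log);
    }
}
```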
It may be a trick question, and the right answer is "it cannot be done".
But I would try to pseudo-code something like this:
try {
    jdbc.startTransaction();
    Savepoint saveJdbc = jdbc.setSavepoint();
    jms.startTransaction();
    Savepoint saveJms = jms.setSavepoint();

    jdbc.doSomeStuff();
    jms.doSomeStuff();

    // call the web service before committing, so a failure here
    // can still roll back the JDBC and JMS work
    if (webService.doSomeStuff() == FAIL) {
        throw new Exception();
    }

    jdbc.commit();
    jms.commit();
}
catch (Exception e) {
    jdbc.rollback(saveJdbc);
    jms.rollback(saveJms);
}
You prepare one service that can roll back, and a second service that can roll back. Then you try the web service, and if it fails, you roll back the two services that support rollback.
There may also be a way to implement rollback in your web service.
We had a similar situation: a web service pushes the data, and we have to read the XML stream and persist it to the DB (Oracle). The implementation we followed:
The web service sends a SOAP message containing the XML stream data.
All request SOAP messages are pushed to JMS.
The respective listener reads the stream and persists the data into temporary tables.
If the request is processed successfully, the data is moved from the temp tables to the actual tables.
On any error, roll back.
Hope these points help.
To my mind, it looks like the interviewer wanted to gauge your ability to think in terms of enterprise-wide distribution. A few points:
JDBC is used for database connectivity.
A web service is typically a mechanism for sending a control command to a server from any client.
JMS is mainly used for notifications of what is happening in the system.
My guess is that your interviewer had a typical scenario in mind:
Data is on one tier (a cluster, or a machine).
Clients may be of any kind: mobile, app, iOS, Objective-C, browser, etc.
JMS is configured to listen to topics (or that is what he wishes he could do).
Probably the best approach is then to write a JMS subscriber that decides what to do in its onMessage() method. As an example, suppose a web service initiates a payment request from a client. This triggers a JMS publisher to tell a DAO to make the necessary connection to the database; while the transaction is in progress and when it finishes, a message is published to the subscriber. You would have fine-grained control of every step, as each would be published through JMS. Though this is difficult to achieve, it could be the approach your interviewer expected. (This is only my guess, please note.)

Communicate between tomcat instances (Distributed Architecture)

We have a distributed architecture where our application runs on four Tomcat instances. I would like to know the various options available for communicating between these Tomcat instances.
The details: say a user sends a request to stop listening to the incoming queues. This needs to be communicated to the other Tomcat instances so that they stop their listeners as well. How can this communication be made across the Tomcats?
Thanks,
Midhun
It looks like you are facing a coordination problem.
I'd recommend Apache ZooKeeper for this kind of problem.
Consider putting your configuration in ZooKeeper. ZooKeeper allows you to watch for changes, so if the configuration is changed in ZooKeeper, each Tomcat instance is notified and you can adjust the behaviour of your application on every node.
You could use any kind of external persistent storage to solve this problem, though.
Another possibility is to implement communication between the Tomcat nodes yourself, but in that case you will have to manage your deployment topology: every Tomcat node has to know about the other nodes in the cluster.
What lies on the surface is RMI and HTTP requests. You could also, IMHO, try MBeans, or even non-Java mechanisms like D-Bus, or even flat files if all the Tomcats run on the same machine. Lots of options...
We use Hazelcast for this kind of scenario. It comes with handy HTTP session clustering.
