Can a WebLogic Singleton Service be used for wait/notify? - java

We have an app which maintains an in-memory HashMap keyed by specific user IDs, with values representing certain system events. The basic flow is that a user makes a request to the web server, which checks the HashMap for any events keyed by their ID; if none are present, it waits on the HashMap for a short time until it either times out or a notify is executed on the HashMap, which wakes the client up and lets it process the event immediately.
This was working fine in a single-server environment, but we are moving to a clustered environment and are unsure of the best way to handle this particular piece.
We are thinking we need to use a database to queue up these events and lose the instant-callback effect of wait/notify, unless it is somehow possible to achieve that using the Singleton Service feature. Using a Singleton Service, would we be able to wait on an object from one server and get notified by a thread on another server in the cluster?
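For concreteness, here is a minimal sketch of the single-server pattern described above (the EventMailbox name, Object event values, and method shapes are illustrative, not taken from the actual app):

    import java.util.HashMap;
    import java.util.Map;

    public class EventMailbox {
        private final Map<String, Object> events = new HashMap<>();

        // Web request thread: return an event if present, otherwise wait
        // up to timeoutMillis for a notify from the event producer.
        public synchronized Object awaitEvent(String userId, long timeoutMillis)
                throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            Object event;
            while ((event = events.remove(userId)) == null) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    return null; // timed out with no event for this user
                }
                wait(remaining);
            }
            return event;
        }

        // Event producer thread: store the event and wake waiting requests.
        public synchronized void publish(String userId, Object event) {
            events.put(userId, event);
            notifyAll();
        }
    }

The monitor here lives in a single JVM, which is exactly why the pattern breaks down in a cluster: a notify on server A never wakes a thread waiting on server B.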

I would suggest you use JMS for that. JMS is cluster-friendly and can be configured to persist the events in either a file store or a database. You can also choose between two models, queue or topic, depending on how your users need to be handled.
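For illustration, a minimal sketch of the JMS suggestion, assuming a connection factory and queue bound in JNDI under the hypothetical names jms/EventCF and jms/UserEvents; in WebLogic the queue would typically be a distributed destination targeted at the cluster, backed by a file or JDBC persistent store:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class JmsEventBridge {
        public void publish(String userId, String payload) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/EventCF");
            Queue queue = (Queue) ctx.lookup("jms/UserEvents");
            Connection con = cf.createConnection();
            try {
                Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                TextMessage msg = session.createTextMessage(payload);
                msg.setStringProperty("userId", userId); // selector key
                producer.send(msg);
            } finally {
                con.close();
            }
        }

        // Wait for an event addressed to this user, or null on timeout.
        public Message awaitEvent(String userId, long timeoutMillis) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/EventCF");
            Queue queue = (Queue) ctx.lookup("jms/UserEvents");
            Connection con = cf.createConnection();
            try {
                con.start(); // required before consuming
                Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer consumer =
                        session.createConsumer(queue, "userId = '" + userId + "'");
                return consumer.receive(timeoutMillis);
            } finally {
                con.close();
            }
        }
    }

The blocking receive(timeout) with a message selector restores the instant-callback effect: the consumer wakes as soon as a matching event arrives from any server in the cluster, or returns null on timeout.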

Related

Hazelcast - Queue with no listeners, or broadcasting to one of many subscribers

I have an architecture where a set of "daemon" processes form my platform. These daemons are full Hazelcast members and are the datastore for all data in the application. The actual business logic is segregated from the daemons and resides in a large number of microservice-style components that are located either physically on the same server or on different machines (VMs, containers, etc.). The services can modify data in the datastore and subscribe to events in the datastore from the daemons, but the model is quite different and abstracted from Hazelcast's map view, so my events are not as simple as listening to map modifications; they are generated when multiple maps are modified in certain ways. The service clients (Hazelcast lite members) define the events that they want to listen to. The catch is that multiple instances (any number) of each flavour of service component could be running, and I only want one instance (any one) to handle each event (i.e. round-robin or load balancing).
My current solution is to use a Hazelcast queue. The daemons listen to events on maps and decide when to trigger an event based on those maps. The daemon that owns the key is the one that triggers the event, so that the event is only triggered in one place. I push this event onto a queue to which each instance of a listener for this event is connected. Thus, whoever gets to the event first processes it.
For example, I have a datasource microservice called IncomingBondPrices that puts prices into the daemon datastore. I have 10 instances of a separate microservice called priceProcessor. When a price reaches a certain threshold, the daemons trigger an event (let's call it "PriceThresholdReached"). I want one and only one of the 10 instances of priceProcessor to handle each event, so if I am streaming in hundreds or thousands of prices, the load of handling the events is split across my instances of priceProcessor.
My concern is what happens if there are no consumers? I can't find any way to count the number of consumers on a Hazelcast queue. The system is entirely dynamic: the services start up and send the definitions of the events they're interested in to the daemons. It is possible for 1, 2, 20, or 100 instances of any given service to be started, and it is possible that they may all be shut down, leaving no subscribers for the event. If there are currently no subscribers to a given event, I'd like to destroy the queue and not push any events to it. I do not want events to queue up if there are no subscribers.
How could I go about managing this? The only way I can come up with is to keep a count of the subscribers for each event type in the daemons and destroy the queues when that drops to 0. But my concern is that services will most likely be killed without a graceful shutdown, so they won't have a chance to explicitly tell the daemon they're not listening anymore. Managing this would require me to explicitly check that all members are still alive, or to subscribe to the events Hazelcast fires when it finds that a member has disconnected and then track down all of that member's subscriptions to end them. Is there a better way to do this? It seems overly complex. Ideally, I would like some way to find out how many members are currently running a take() on the queue at any given time; if that is 0 and there is no data on the queue, then destroy it.
Thank-you,
Troy.
What I can suggest is to create a dedicated ISet (or IMap) named, for instance, "registerConsumers". Each consumer writes its id into the set and removes it in a shutdown hook.
Producers check the set initially and register an ItemListener to stay updated. What should you do if a listener's process dies ungracefully? You can rely on load balancing to start a new instance, which will then show up in the set. If you use an IMap instead, each consumer can periodically refresh a timestamp stored as its map value, while the producer periodically checks the last update and removes entries that have not been refreshed. This way, if you see that there are no consumers, you can simply persist the data in another store until a consumer becomes available. Why destroy the queues? A consuming microservice has to start up eventually.
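A rough sketch of the IMap variant, with illustrative intervals (heartbeat every 5 seconds, entries expiring after 15); here Hazelcast's per-entry TTL takes the place of the producer-side sweep of stale entries:

    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ConsumerRegistry {
        private final IMap<String, Long> registry;
        private final ScheduledExecutorService heartbeat =
                Executors.newSingleThreadScheduledExecutor();

        public ConsumerRegistry(HazelcastInstance hz) {
            this.registry = hz.getMap("registerConsumers");
        }

        // Consumer side: re-put our id every 5s with a 15s TTL, so a
        // consumer killed without its shutdown hook ages out by itself.
        public void register(String consumerId) {
            heartbeat.scheduleAtFixedRate(
                    () -> registry.put(consumerId, System.currentTimeMillis(),
                            15, TimeUnit.SECONDS),
                    0, 5, TimeUnit.SECONDS);
            Runtime.getRuntime().addShutdownHook(
                    new Thread(() -> registry.remove(consumerId)));
        }

        // Producer side: an empty registry means no consumer has sent a
        // heartbeat recently, so persist events elsewhere instead of queueing.
        public boolean hasLiveConsumers() {
            return !registry.isEmpty();
        }
    }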

How to maintain order of tasks for asynchronous operations using multithreading

I am writing a Java application where the central data structure will be updated with both the request and the corresponding response from external systems.
How can I make sure there are no race conditions? Below is how I implemented it.
I receive a request from the GUI, process it, and store it in a hashmap of hashmaps, then forward the request to an external system, from which I get the response asynchronously. When I receive the response, I use an id that I sent earlier to update the data structure (the hashmap of hashmaps).
I created one thread that handles requests from the GUI and another that handles responses from the external system.
I have created 2 LinkedBlockingQueues: one for requests and another for responses.
I am using an executor service to create multiple threads for requests and responses.
How do I make sure things are executed in order?
This is an order management system, and I don't want an amend to be sent before the new order is sent.
Use Hashtable; it is the synchronized implementation of a Map.
The synchronized implementation prevents more than one thread from accessing it at the same time.
https://docs.oracle.com/javase/8/docs/api/java/util/Hashtable.html
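A minimal sketch of the suggestion, with a hypothetical Order placeholder standing in for the real value type:

    import java.util.Hashtable;
    import java.util.Map;

    public class OrderStore {
        static class Order { /* illustrative placeholder */ }

        // Hashtable synchronizes every call, so the GUI-request thread and
        // the response thread can put/get concurrently without corruption.
        private final Map<String, Map<String, Order>> byClient = new Hashtable<>();

        public void record(String clientId, String orderId, Order order) {
            // computeIfAbsent is atomic on Hashtable (it is synchronized).
            byClient.computeIfAbsent(clientId, id -> new Hashtable<>())
                    .put(orderId, order);
        }

        public Order lookup(String clientId, String orderId) {
            Map<String, Order> orders = byClient.get(clientId);
            return orders == null ? null : orders.get(orderId);
        }
    }

Note that this only makes individual operations atomic; it does not by itself order the sends (a new order before its amend), and compound check-then-act sequences still need their own lock. On current JVMs, ConcurrentHashMap is the more usual choice for this kind of map.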

Request-Reply through a Queue with Hazelcast

I wonder if I can do request-reply with this:
1 Hazelcast instance/member (central point)
1 application with a hazelcast-client sending requests through a queue
1 application with a hazelcast-client waiting for requests on the queue
The 1st application also receives the response on another queue, posted by the second application.
Is this a good way to proceed? Or can you think of a better solution?
Thanks!
Over the last couple of days I also worked on an "SOA-like" solution using Hazelcast queues to communicate between different processes on different machines.
My main goals were to have
"one to one-of-many" communication with garanteed reply of one-of-the-many's
"one to one" communication one way
"one to one" communication with answering in a certain time
To make a long story short, I dropped this approach today for the following reasons:
lots of complicated code with executor services, callables, runnables, InterruptedExceptions, shutdown handling, Hazelcast transactions, etc.
dangling messages in the "one to one" communication when the receiver has a shorter lifetime than the sender
losing messages if I kill certain cluster member(s) at the right time
all cluster members must be able to deserialize the message, because it could be stored anywhere; therefore the messages can't be "specific" to certain clients and services.
I switched over to a much simpler approach:
all "services" register themselves in a MultiMap ("service registry") using the hazelcast cluster member UUID as key. Each entry contains some meta information like service identifier, load factor, starttime, host, pid, etc
clients pick a UUID of one of the entries in that MultiMap and use a DistributedTask (distributed executor service) for the choosen specific cluster member to invoke the service and optionally get a reply (in time)
only the service client and the service must have the specific DistributedTask implementation in their classpath, all other cluster members are not bothered
clients can easily figure out dead entries in the service registry themselves: if they can't see a cluster member with the specific UUID (hazelcastInstance.getCluster().getMembers()), the service died probably unexpected. Clients can then pick "alive" entries, entries which fewer load factor, do retries in case of idempotent services, etc
Programming with the second approach is very easy and powerful (e.g. timeouts or cancellation of tasks), with much less code to maintain.
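A rough sketch of the registry approach, with illustrative names; newer Hazelcast versions express the DistributedTask mentioned above as IExecutorService.submitToMember(..):

    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IExecutorService;
    import com.hazelcast.core.Member;
    import com.hazelcast.core.MultiMap;
    import java.io.Serializable;
    import java.util.concurrent.Callable;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;

    public class ServiceRegistryExample {
        // Service side: advertise this member's UUID under the service id.
        public static void register(HazelcastInstance hz, String serviceId) {
            String uuid = hz.getCluster().getLocalMember().getUuid();
            hz.<String, String>getMultiMap("serviceRegistry").put(serviceId, uuid);
        }

        // Client side: pick a registered member that is still alive and
        // invoke the task there, waiting up to 5s for the reply.
        public static String call(HazelcastInstance hz, String serviceId)
                throws Exception {
            MultiMap<String, String> registry = hz.getMultiMap("serviceRegistry");
            for (Member member : hz.getCluster().getMembers()) {
                if (registry.get(serviceId).contains(member.getUuid())) {
                    IExecutorService exec = hz.getExecutorService("default");
                    Future<String> reply = exec.submitToMember(new PingTask(), member);
                    return reply.get(5, TimeUnit.SECONDS);
                }
            }
            throw new IllegalStateException("no live provider for " + serviceId);
        }

        // Only the client and the service need this class on their classpath.
        public static class PingTask implements Callable<String>, Serializable {
            public String call() {
                return "pong";
            }
        }
    }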
Hope this helps!
In the past we built an SOA system that uses Hazelcast queues as a bus. Here are some of the highlights:
a. Each service has an incoming queue; the service name is simply the name of the queue. You can have as many service providers as you wish, and you can scale up and down. All you need is for these service providers to poll this queue and process the arriving requests.
b. Since the system is fully asynchronous, there is a call id on both the request and the response to correlate them.
c. Each client sends a request to the queue of the service that it wants to call. The request carries all the parameters for the service, the name of a queue on which to send the response, and a call id. The queue name can simply be the address of the client; this way each client has its own unique queue.
d. Upon receiving the request, a service provider processes it and sends the response to the answer queue.
e. Each client also continuously polls its input queue to receive the answers to the requests that it sent.
The major drawback of this design is that queues are not as scalable as maps, so the system as a whole is not very scalable. However, it can still process around 5K requests per second.
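A condensed sketch of points a–e, with an illustrative Request type; for brevity the reply queue here is created per call rather than per client, so the call id mainly documents the correlation idea from point b:

    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IQueue;
    import java.io.Serializable;
    import java.util.UUID;
    import java.util.concurrent.TimeUnit;

    public class QueueBusClient {
        public static class Request implements Serializable {
            final String callId = UUID.randomUUID().toString(); // b: correlation
            final String replyQueue; // c: where the provider should answer
            final String payload;

            Request(String replyQueue, String payload) {
                this.replyQueue = replyQueue;
                this.payload = payload;
            }
        }

        public static String call(HazelcastInstance hz, String serviceName,
                                  String payload) throws InterruptedException {
            // a: the service's incoming queue is simply named after the service.
            IQueue<Request> serviceQueue = hz.getQueue(serviceName);
            // c: this client's own unique reply queue.
            String replyQueueName = "reply-" + UUID.randomUUID();
            serviceQueue.put(new Request(replyQueueName, payload));
            // e: poll our reply queue for the answer, with a timeout.
            IQueue<String> replyQueue = hz.getQueue(replyQueueName);
            return replyQueue.poll(5, TimeUnit.SECONDS); // null if no reply in time
        }
    }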
I made a test myself and validated that it works well, with certain limitations.
The architecture is Producer-Hazelcast_node-Consumer(s)
Using two Hazelcast queues, one for requests and one for responses, I could measure a round trip of under 1 ms.
Load balancing works fine if I put several consumers on the request queue.
If I add another node and connect the clients to each node, the round trip rises above 15 ms. This is due to replication between the 2 Hazelcast nodes. If I kill a node, the clients continue to work, so failover works, at the cost of time.
Can't you use the correlation id to perform request-reply on a single queue in Hazelcast? That's the id that should uniquely define a conversation between 2 providers/consumers of a queue.
What is the purpose of this setup, @unludo? I am just curious.

JMS Client Session Usage

I'm attempting to utilize the .NET Kaazing client in order to interact with a JMS back-end via WebSockets. I'm struggling to understand the correct usage of sessions. Initially I had a single session shared across all threads, but I noticed that this is not supported:
A Session object is a single-threaded context for producing and consuming messages. Although it may allocate provider resources outside the Java virtual machine (JVM), it is considered a lightweight JMS object.
The reason I had a single session was just because I thought that would yield better performance. Since the documentation claimed sessions were lightweight, I had no hesitation switching my code over to use a session per "operation". By "operation" I mean either sending a single message, or subscribing to a queue/topic. In the former case, the session is short-lived and closed immediately after the message is sent. In the latter case, the session needs to live as long as the subscription is active.
When I tried creating multiple sessions I got an error:
System.NotSupportedException: Only one non-transacted session can be active at a time
Googling this error was fruitless, so I tried switching over to transacted sessions. But when attempting to create a consumer I get a different error:
System.NotSupportedException: This operation is not supported in transacted sessions
So it seems I'm stuck between a rock and a hard place. The only options I see are to share my session across threads, or to have a single non-transacted session used to create consumers and multiple transacted sessions for everything else. Both approaches seem a little against the grain to me.
Can anyone shed some light on the correct way for me to handle sessions in my client?
There are several ways to add concurrency to your application. You could use multiple Connections, but that is probably not desirable due to the increase in network overhead. Better would be to implement a simple mechanism for handling the concurrency in the MessageListener, by dispatching Tasks or by delivering messages via ConcurrentQueues. Here are some choices of implementation strategy:
The Task-based approach would use a TaskScheduler. In the MessageListener, a task would be scheduled to handle the work, and the listener would return immediately. You might schedule a new Task per message, for instance. At that point the MessageListener returns and the next message is immediately available. This approach is fine for low-throughput applications, e.g. a few messages per second, where you nevertheless need concurrency, perhaps because some messages take a long time to process.
Another approach would be to use a data structure of pending messages (a ConcurrentQueue). When the MessageListener is invoked, each Message would be added to the ConcurrentQueue and the listener would return immediately. A separate set of threads/tasks can then pull messages from that ConcurrentQueue using an appropriate strategy for your application. This would suit a higher-performance application.
A variation of this approach is to have a ConcurrentQueue for each thread processing inbound messages. Here the MessageListener would not manage a single ConcurrentQueue of its own; instead it would deliver each message to the ConcurrentQueue associated with the appropriate thread. For instance, if your inbound messages represent stock feeds and news feeds, one thread (or set of threads) could process the stock feed messages, and another could process the inbound news items separately.
Note that if you are using JMS Queues, each message will be acknowledged implicitly when your MessageListener returns. This may or may not be the behavior you want for your application.
For higher performance applications, you should consider approaches 2 and 3.
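For illustration, a Java sketch of approach 2 (the .NET version would use Task and a ConcurrentQueue in the same shape); the listener only enqueues and returns, so the session's single thread is never blocked by slow work:

    import javax.jms.Message;
    import javax.jms.MessageListener;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;

    public class QueueingListener implements MessageListener {
        private final BlockingQueue<Message> pending = new LinkedBlockingQueue<>();
        private final ExecutorService workers = Executors.newFixedThreadPool(4);

        public QueueingListener() {
            for (int i = 0; i < 4; i++) {
                workers.submit(() -> {
                    while (!Thread.currentThread().isInterrupted()) {
                        try {
                            process(pending.take());
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });
            }
        }

        @Override
        public void onMessage(Message message) {
            // Hand off and return immediately. Note the caveat above: the
            // message is acknowledged when onMessage returns, before the
            // worker has actually processed it.
            pending.add(message);
        }

        private void process(Message message) {
            // application-specific work goes here
        }
    }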

Would a JMS Topic suffice in this situation? Or should I look elsewhere?

There is one controlling entity and several 'worker' entities. The controlling entity requests certain data from the worker entities, which they will fetch and return in their own manner.
Since the controlling entity can be agnostic about the worker entities (and the worker entities can be added or removed at any point), putting a JMS provider between them sounds like a good idea. That's the assumption, at least.
Since it is a one-to-many relation (controller -> workers), a JMS Topic would be the right solution. But since the controlling entity depends on the return values of the workers, request/reply functionality would be nice as well (somewhere I read about the TopicRequestor, but I cannot seem to find a working example). Request/reply is typically Queue functionality.
As an attempt to use topics in a request/reply sort of way, I created two JMS topics: request and response. The controller publishes to the request topic and is subscribed to the response topic. Every worker is subscribed to the request topic and publishes to the response topic. To match requests and responses, the controller subscribes to the response topic with a per-request filter (using a session id as the value). The messages the workers publish to the response topic have that session id associated with them.
Now this does not feel like a solution (rather, it uses JMS as a hammer and treats the problem, and some more, as a nail). Is JMS a solution in this situation at all? Or are there other solutions I'm overlooking?
Your approach sort of makes sense to me. I think a messaging system could work, but I think using topics is wrong. Take a look at the wiki page for Enterprise Service Bus. It's a little more complicated than you need, but the basic idea for your use case is that you have a worker capable of reading from one queue, doing some processing, and adding the processed data back to another queue.
The problem with a topic is that all workers get the message at the same time and all work on it independently. It sounds like you only want one worker at a time working on each request. I think you have it as a topic so that different types of workers can listen to the same destination and only respond to certain requests. For that, you are better off just creating a new queue for each type of work. You could potentially have them in pairs, so you have a work_a_request queue and a work_a_response queue. Or, if your controller is capable of figuring out the type of response from the data, they can all write to a single response queue.
If you haven't chosen a message queue vendor yet, I would recommend RabbitMQ, as it's easy to set up, easy to add new queues to (especially dynamically), and has really good Spring support (although most major messaging systems have Spring support, and you may not even be using Spring).
I'm also not sure what you are accomplishing with the filters. If you ensure that the messages to the workers contain all the information needed to do the work, and the response messages contain all the information your controller needs to finish the processing, I don't think you need them.
I would simply use two JMS queues.
The first one is the queue that all of the requests go on. The workers listen to this queue and process the requests in their own time, in their own way.
Once complete, they bundle the request with the response and put that on another queue for the final process to handle. This way there's no need for the submitting process to retain the requests; they just travel along with the entire procedure. A final process listens to the second queue and handles the request/response pairs appropriately.
If there's no need for the messaging to be reliable, or for the actual processes to span JVMs or machines, then this can all be done within a single process using standard Java threading (such as BlockingQueues and ExecutorServices).
If there's a need to accumulate related responses, then you'll need to capture whatever grouping data is necessary and have the Queue 2 listening process accumulate results. Or you can persist the results in a database.
For example, if you know your working set has five elements, you can queue up the requests with that information (1 of 5, 2 of 5, etc.). As each one finishes, the final process can update the database, counting elements. When it sees that all of the pieces have been completed (in any order), it marks the result as complete. Later, some audit process can scan for incomplete jobs that have not finished within some time (perhaps one of the messages erred out), so you can handle them better. Or the original processors can write the failed request to a separate "this one went bad" queue for mitigation and resubmission.
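A sketch of that counting step, with a ConcurrentHashMap standing in for the database table:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    public class CompletionTracker {
        private final Map<String, AtomicInteger> remaining = new ConcurrentHashMap<>();

        // Called when the job is queued, e.g. expect("job-42", 5).
        public void expect(String jobId, int pieces) {
            remaining.put(jobId, new AtomicInteger(pieces));
        }

        // Called by the Queue 2 listener for each response; returns true
        // exactly once, when the last piece (in any order) has arrived.
        public boolean pieceCompleted(String jobId) {
            return remaining.get(jobId).decrementAndGet() == 0;
        }
    }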
If you use JMS with transactions and one of the processors fails, the transaction rolls back and the message is retained on the queue for processing by one of the surviving processors, so that's another advantage of JMS.
The trick with this kind of processing is to try to push the state along with the message, or to externalize it and send references to the state, thus making each component effectively stateless. This aids scaling and reliability, since any component can fail (short of a catastrophic JMS failure, naturally) and just pick up where it left off once you get the problem resolved and get it restarted.
If you're in a request/response mode (such as a servlet needing to respond), you can use Servlet 3.0 async servlets to easily put things on hold, or you can put a local object in an internal map, keyed by something such as the session ID, and then Object.wait() on it. Then, when your Queue 2 listener gets the response, it finalizes the processing and uses the session ID (sent with the message and retained throughout the pipeline) to look up the object you're waiting on; it can then simply Object.notify() it to tell the servlet to continue.
Yes, this ties up a thread in the servlet container while waiting; that's why the new async stuff is better, but you work with the hand you're dealt. You can also add a timeout to the Object.wait(); if it times out, the processing took too long and you can gracefully alert the client.
This basically frees you from filters, reply queues, and the like. It's pretty simple to set it all up.
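A minimal sketch of the parking pattern, with illustrative names:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ResponseRendezvous {
        private final Map<String, Object> parked = new ConcurrentHashMap<>();
        private final Map<String, Object> responses = new ConcurrentHashMap<>();

        // Servlet thread: park on a per-session lock until the reply lands
        // or the timeout expires (so we can gracefully alert the client).
        public Object awaitReply(String sessionId, long timeoutMillis)
                throws InterruptedException {
            Object lock = new Object();
            parked.put(sessionId, lock);
            synchronized (lock) {
                if (!responses.containsKey(sessionId)) {
                    lock.wait(timeoutMillis);
                }
            }
            parked.remove(sessionId);
            return responses.remove(sessionId); // null if we timed out
        }

        // Queue 2 listener thread: store the response, then wake the servlet.
        public void deliver(String sessionId, Object response) {
            responses.put(sessionId, response);
            Object lock = parked.get(sessionId);
            if (lock != null) {
                synchronized (lock) {
                    lock.notify();
                }
            }
        }
    }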
The actual answer depends on whether your worker entities are external parties physically located outside your network, how long a worker entity is expected to take to finish its work, etc. But the problem you are trying to solve is one-to-many communication. You added JMS to your system either because you want all entities to be able to talk the JMS protocol, or because you want asynchrony. The former reason does not make much sense; if it is the latter, you could choose another communication mechanism, such as one-way web service calls.
You can use the latest Java concurrency APIs to make multi-threaded, asynchronous, one-way web service calls to the different worker entities.
