This is related to the Timer Service:
To save a Timer object for future reference, invoke its getHandle
method and store the TimerHandle object in a database. (A TimerHandle
object is serializable.) To reinstantiate the Timer object, retrieve
the handle from the database and invoke getTimer on the handle. A
TimerHandle object cannot be passed as an argument of a method defined
in a remote or web service interface. In other words, remote clients
and web service clients cannot access a bean’s TimerHandle object.
Local clients, however, do not have this restriction.
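The round trip described above relies on TimerHandle being Serializable. A minimal plain-Java sketch of that serialize/deserialize step, using a hypothetical stand-in handle class (a real TimerHandle can only be obtained from a container-managed Timer via getHandle()):

```java
import java.io.*;

public class HandleRoundTrip {
    // Stand-in for a serializable handle; a real TimerHandle comes from Timer.getHandle()
    static class FakeHandle implements Serializable {
        private static final long serialVersionUID = 1L;
        final String timerId;
        FakeHandle(String timerId) { this.timerId = timerId; }
    }

    // Serialize the handle to bytes, e.g. for a BLOB/VARBINARY database column
    static byte[] toBytes(Serializable handle) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(handle);
        }
        return bos.toByteArray();
    }

    // Later: read the bytes back from the database and recover the handle
    static FakeHandle fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (FakeHandle) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] stored = toBytes(new FakeHandle("weekly-report-42"));
        FakeHandle restored = fromBytes(stored);
        System.out.println(restored.timerId); // with a real handle: restored.getTimer()
    }
}
```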
I'd like to know if it is somehow possible to do something similar with tasks scheduled by the ManagedScheduledExecutorService.
What I'd like to implement:
I'm working on a web app that displays different reports (e.g. tables, charts etc.) based on corporate data. The user is able to export a specific report to pdf.
Now, I'd like to give the user the possibility to subscribe to a specific report, meaning he wants to get that report mailed weekly, monthly, or to whatever time he configures while subscribing. Of course, also unsubscribing should be possible at any time, and for this I think I need to refer to some persisted user-related task information (like the handler in the Timer Service) that leads me to the task to be canceled.
From what I've investigated so far, I'm opting for the Executor Service because it is better suited for concurrent batch jobs. The Timer Service schedules all tasks on a single thread, which might lead to tasks blocking one another.
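Unlike a TimerHandle, a ScheduledFuture is not serializable, so it cannot be persisted directly. A common workaround is to keep the futures in an in-memory map keyed by a persisted subscription id and cancel through that map. A minimal sketch, with illustrative names (with ManagedScheduledExecutorService you would inject the container-managed executor instead of creating one):

```java
import java.util.Map;
import java.util.concurrent.*;

public class SubscriptionScheduler {
    private final ScheduledExecutorService executor = Executors.newScheduledThreadPool(4);
    // subscriptionId -> running task; the id itself is what you persist in the database
    private final Map<String, ScheduledFuture<?>> tasks = new ConcurrentHashMap<>();

    public void subscribe(String subscriptionId, Runnable reportJob, long periodSeconds) {
        ScheduledFuture<?> future = executor.scheduleAtFixedRate(
                reportJob, periodSeconds, periodSeconds, TimeUnit.SECONDS);
        tasks.put(subscriptionId, future);
    }

    public boolean unsubscribe(String subscriptionId) {
        ScheduledFuture<?> future = tasks.remove(subscriptionId);
        return future != null && future.cancel(false); // let a running report finish
    }

    public void shutdown() {
        executor.shutdown();
    }
}
```

Because the map lives only in memory, the application must re-register all persisted subscriptions at startup: the durable state is the subscription row in your database, not the task itself.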
Related
I have an architecture where a set of "daemon" processes form my platform. These daemons are full Hazelcast members and act as the datastore for all data in the application. The actual business logic is segregated from the daemons and resides in a large number of microservice-style components, located either physically on the same server or on different machines (VMs, containers, etc.). The services can modify data in the datastore and subscribe to events in the datastore from the daemons, but the model is quite different and abstracted from Hazelcast's map view, so my events are not as simple as listening to map modifications: they are generated when multiple maps are modified in certain ways. The service clients (Hazelcast lite members) define the events they want to listen to. The catch is that multiple instances (any number) of each flavour of service component could be running, and I only want one instance (any one) to handle each event (i.e. round-robin or load balancing).
My current solution is to use a Hazelcast queue. The daemons listen to events on maps and decide when to trigger an event based on those maps. The daemon that owns the key is the one that triggers the event, so the event is only triggered in one place. I push this event onto a queue that each instance of a listener for this event is connected to; thus, whoever gets to the event first processes it.
For example, I have a datasource microservice called IncomingBondPrices that puts the prices into the daemon datastore. I have 10 instances of a separate microservice called priceProcessor. When a price reaches a certain threshold the daemons trigger an event (let's call it "PriceThresholdReached"). I want one and only one of the 10 instances of priceProcessor to handle each event so if I am streaming in hundreds or thousands of prices the load of handling the events is split across my instances of priceProcessor.
My concern is what happens if there are no consumers. I can't find any way to count the number of consumers on a Hazelcast queue. The system is entirely dynamic: the services start up and send the definitions of the events they're interested in to the daemons. It is possible for 1, 2, 20, or 100 instances of any given service to be started, and it is possible that they may all be shut down, leaving no subscribers for the event. If there are currently no subscribers to a given event, I'd like to destroy the queue and not push any events to it. I do not want events to queue up if there are no subscribers.
How could I go about managing this? The only way I can come up with is to keep a count of the subscribers for each event type in the daemons and destroy the queues when that count drops to 0. But my concern is that services will most likely be killed without a graceful shutdown, so they won't have a chance to explicitly tell the daemon they're no longer listening. Managing this would require me to explicitly check that all members are still alive, or subscribe to the events Hazelcast raises when it finds that a member has disconnected and then track down all of that member's subscriptions to end them. Is there a better way to do this? It seems overly complex. Ideally, I would like some way to find out how many members are currently running a take() on the queue at any given time, and if that is 0 and there is no data on the queue, then destroy it.
Thank-you,
Troy.
What I can suggest is to create a dedicated ISet (or IMap) with a name like "registerConsumers". Each consumer writes its id into the set and removes it in a shutdown hook.
Producers check the set initially and register an ItemListener to stay updated. What should you do if a listener process dies without a graceful shutdown? Rely on load balancing: a new instance will be started and you will see it appear. If you use an IMap instead, each consumer can periodically refresh a timestamp in its map value, while the producer periodically checks the last update and removes entries that were not refreshed in time. This way, if you see that there are no consumers, simply persist the data in another store while waiting for a consumer to become available. Why destroy the queues? Eventually a consuming microservice will start again.
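The heartbeat idea above can be illustrated without Hazelcast: each consumer periodically refreshes a timestamp under its id, and the producer prunes entries older than a timeout before deciding whether anyone is listening. A minimal plain-Java sketch of that bookkeeping (with Hazelcast you would back it with an IMap<String, Long> instead of the local map):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConsumerRegistry {
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
    private final long timeoutMillis;

    public ConsumerRegistry(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // Called periodically by each consumer: its "heartbeat"
    public void heartbeat(String consumerId) {
        lastSeen.put(consumerId, System.currentTimeMillis());
    }

    // Called by the producer before publishing: drop dead consumers, report liveness
    public boolean hasLiveConsumers() {
        long cutoff = System.currentTimeMillis() - timeoutMillis;
        lastSeen.values().removeIf(ts -> ts < cutoff);
        return !lastSeen.isEmpty();
    }
}
```

A consumer killed without a shutdown hook simply stops heartbeating and is pruned after one timeout, which sidesteps the graceful-shutdown problem the question raises.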
I'm writing a webserver with spring(mvc,data,security) which is serving tasks to physical devices(device count is around 100).
The device doesn't have a query implementation inside. For example, to execute some task you need to write something like this:
Device driver = new DeviceDriver();
driver.setSettings(settingsJson);
driver.open(); // no one else can connect to this device while it is open; open() can take up to 1 second
driver.setTask(taskJson);
driver.processTask(); // each task takes a few seconds to execute
String results = driver.getResults();
driver.close();
I'm not really an expert in designing architecture, so for now implemented webserver like this:
TaskController (@RestController) - processes incoming POST requests with tasks and persists them to the database.
DeviceService (@Service) - has an init method, which gets the list of devices from the DB and creates/starts one worker per device. It passes the taskRepository to each worker, so the worker can save the results of tasks.
Worker - extends Thread; it gets the next task from the database at a certain period (via a loop with sleep). When a task has executed, the worker saves the result to the DB and updates the status of the task.
Does this approach make any sense? Maybe there is a better way to do this using Spring components instead of Thread.
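If the per-device serialization is a hard requirement (only one connection to a device at a time), one way to keep it without subclassing Thread is a single-threaded executor per device: the executor's internal queue guarantees that tasks for one device run one at a time, while different devices run in parallel. A sketch under the assumption that tasks arrive as Runnables (in Spring you could back this with a TaskExecutor bean instead):

```java
import java.util.concurrent.*;

public class DeviceTaskDispatcher {
    // One single-threaded executor per device id: tasks for a device never overlap
    private final ConcurrentMap<String, ExecutorService> workers = new ConcurrentHashMap<>();

    public Future<?> submit(String deviceId, Runnable task) {
        ExecutorService worker = workers.computeIfAbsent(
                deviceId, id -> Executors.newSingleThreadExecutor());
        return worker.submit(task);
    }

    public void shutdown() {
        workers.values().forEach(ExecutorService::shutdown);
    }
}
```

New devices get a worker lazily on their first task, so on-boarding a device no longer requires a restart.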
I would not create workers for each device (client), because your controller will already be able to serve concurrent requests when deployed on a thread-per-request server. Additionally, this is not scalable at all: what if a new device is on-boarded? With the current design you need to make changes in the database and restart the service!
If you need device-specific actions, you can just pass them in the request parameters from the device client. Therefore, there is no need to keep a predefined set of workers.
So, the design looks good except the worker set.
Use the @Scheduled annotation on your methods to build something cron-like.
I have an application that does a long running job and pushes the task in the task queue. Currently, when different users login to the application and start the upload job, the job merges with the existing task and expected output is not achieved.
What I need exactly is to run different instances of app engine application for every user as every user will be needing much amount of computing power and these instances must be dynamically get created when every new user is encountered.
I had referred different docs on instance classes and scaling types, but didn't get to know how to start a new instance for every different user.
Please also suggest if there is a better solution to this.
Though I have not used this myself, the documentation suggests this could do it for you:
<max-concurrent-requests>
Optional. The number of concurrent requests an automatic scaling instance can accept before the scheduler spawns a new instance
Try setting that value to 1 in your appengine-web.xml
(see the documentation link above for more info).
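A minimal fragment of the setting suggested above; the element placement under automatic scaling follows the App Engine docs for the Java standard environment, so check them before relying on it:

```xml
<appengine-web-app xmlns="http://appspot.com/ns/1.0">
  <automatic-scaling>
    <!-- Spawn a new instance rather than let one instance serve two requests at once -->
    <max-concurrent-requests>1</max-concurrent-requests>
  </automatic-scaling>
</appengine-web-app>
```

Note that forcing one request per instance can sharply increase your instance count and cost, which may matter given that each user is expected to need significant computing power.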
We have an app which maintains a HashMap in memory, keyed by specific user IDs, with values representing certain system events. The basic functionality is that the user makes a request to the web server, which checks the HashMap for any events keyed by their ID; otherwise it waits for a short amount of time on the HashMap until it either times out or a notify is executed on the HashMap, which wakes the client up and immediately processes the event.
This was working fine in a single server environment but we are moving to a clustered environment and unsure of the best way to handle this particular piece.
We are thinking we need to use a database to queue up these events and lose that instant callback effect of wait/notify, unless it is possible to somehow achieve that using the Singleton Service feature. Using a Singleton Service, would we be able to wait on an object from one server and get notified by a thread on another server in the cluster?
I would suggest you use JMS for that. JMS is cluster-friendly and can also be configured to persist the events either in file storage or a database. You can also choose between two models, queue or topic, depending on how your users need to be handled.
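The wait/notify-on-HashMap pattern maps naturally onto a per-user blocking queue, which is also the semantics a JMS queue gives you across the cluster: the request handler blocks with a timeout instead of wait(), and the event producer's put replaces notify(). A minimal single-JVM sketch of that shape (illustrative names; in the clustered version the queue would be a JMS destination per user or a selector on one destination):

```java
import java.util.concurrent.*;

public class UserEventBus {
    private final ConcurrentMap<String, BlockingQueue<String>> queues = new ConcurrentHashMap<>();

    private BlockingQueue<String> queueFor(String userId) {
        return queues.computeIfAbsent(userId, id -> new LinkedBlockingQueue<>());
    }

    // Producer side: replaces notify() on the shared HashMap
    public void publish(String userId, String event) {
        queueFor(userId).offer(event);
    }

    // Consumer side: replaces wait() with a bounded timeout; null means timed out
    public String awaitEvent(String userId, long timeoutMillis) throws InterruptedException {
        return queueFor(userId).poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```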
I need to wait for a condition in a Spring MVC request handler while I call a third party service to update some entities for a user.
The wait averages about 2 seconds.
I'm calling Thread.sleep to allow the remote call to complete and for the entities to be updated in the database:
Thread.sleep(2000);
After this, I retrieve the updated models from the database and display the view.
However, what will be the effect on parallel requests that arrive for processing at this controller/request handler?
Will parallel requests also experience a wait?
Or will they be spawned off into separate threads and so not be affected by the delay experienced by the current request?
What you are doing may work sometimes, but it is not a reliable solution.
The Java Future interface, along with a configured ExecutorService, allows you to begin some operation and have one or more threads wait until the result is ready (or, optionally, until a certain amount of time has passed).
You can find documentation for it here:
http://download.oracle.com/javase/6/docs/api/java/util/concurrent/Future.html
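The sleep can be replaced by submitting the remote call as a task and blocking on its Future with a timeout, so each request waits only as long as its own call actually takes. A sketch assuming the third-party update is wrapped in a Callable (names are illustrative):

```java
import java.util.concurrent.*;

public class UpdateWaiter {
    private final ExecutorService executor = Executors.newFixedThreadPool(8);

    // Runs the remote update and waits for completion with an upper bound,
    // instead of sleeping a fixed 2 seconds on every request
    public <T> T callWithTimeout(Callable<T> remoteUpdate, long timeoutMillis)
            throws InterruptedException, ExecutionException, TimeoutException {
        Future<T> future = executor.submit(remoteUpdate);
        return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        executor.shutdown();
    }
}
```

Each request thread blocks only on its own future, so parallel requests are unaffected by one another beyond ordinary thread-pool contention.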