Suppose I have a subscription query like this:
queryGateway.subscriptionQuery(
    FetchListOfBookQuery,
    ResponseTypes.multipleInstancesOf(Book::class.java),
    ResponseTypes.multipleInstancesOf(Book::class.java)
)
So it subscribes to the list of Books in the database. If I want to add a new book, I would have something like this in my projection:
@EventHandler
fun on(event: BookAddedEvent) {
    val book = repo.save(Book(event.bookId)).block()
    queryUpdateEmitter.emit(
        FetchListOfBookQuery::class.java,
        { it.bookId == book.bookId },
        book
    )
}
The problem is that I only have the single new Book that was added, but in order to update the subscription query I need the previous list of Books as well. Is there a way to get the previous update state of the subscription query, compare the changes, and finally emit the update?
The Subscription Query logic provided by Axon Framework allows you to retrieve an initial response and subsequent updates. In code, this translates into firstly hitting a @QueryHandler annotated method and secondly emitting the updates through the QueryUpdateEmitter.
What is emitted is completely up to you. So if you decide to send the newly added Book in combination with all the previous Books, that is perfectly fine. As you have likely noticed though, the QueryUpdateEmitter does not store the updates itself, nor does the SubscriptionQueryResult on the query dispatching end.
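Since nothing is stored for you, the most straightforward option is to re-read the current list from the projection's own repository and emit that, so updates have the same shape as the initial result. A minimal sketch in Java (the Kotlin equivalent is analogous; a blocking repository with findAll() is assumed here for brevity, and the Book/event/query types are the ones from your question):

import java.util.List;
import org.axonframework.eventhandling.EventHandler;
import org.axonframework.queryhandling.QueryUpdateEmitter;

public class BookProjection {

    private final BookRepository repo;                   // assumed blocking repository
    private final QueryUpdateEmitter queryUpdateEmitter;

    public BookProjection(BookRepository repo, QueryUpdateEmitter queryUpdateEmitter) {
        this.repo = repo;
        this.queryUpdateEmitter = queryUpdateEmitter;
    }

    @EventHandler
    public void on(BookAddedEvent event) {
        repo.save(new Book(event.getBookId()));
        // Re-read the full, up-to-date list so the update has the same shape
        // as the initial subscription result.
        List<Book> allBooks = repo.findAll();
        queryUpdateEmitter.emit(
                FetchListOfBookQuery.class,
                query -> true, // a fetch-all query has no criteria to filter on
                allBooks);
    }
}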
Thus if you need logic to filter out what has been sent with a previous update, you will have to build this yourself. To that end you could take the route of building a dedicated piece of logic, a service maybe, which does the job. Or you could create your own QueryUpdateEmitter which enhances the behaviour to simplify the update being sent.
I'd argue the latter would be the cleanest approach, for which I'd recommend wrapping the SimpleQueryUpdateEmitter. However, this could mean quite some custom code, so I'd first check whether there is a different way around the requirement you are stating:
... but in order to update the subscription query I need the previous list of Books.
If you do end up going that route out of bare necessity, something along the lines of the sketch below could serve as a starting point; I would be interested to see the outcome, or potentially help out with suggestions on the matter.
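Purely as an illustration of that route, here is a hedged sketch of a wrapper that remembers what was emitted per query type and always sends the accumulated list. The class and method names are mine, not Axon's; only QueryUpdateEmitter itself is part of the framework:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Predicate;
import org.axonframework.queryhandling.QueryUpdateEmitter;

public class AccumulatingQueryUpdateEmitter {

    private final QueryUpdateEmitter delegate; // e.g. the SimpleQueryUpdateEmitter
    private final Map<Class<?>, List<Object>> emitted = new ConcurrentHashMap<>();

    public AccumulatingQueryUpdateEmitter(QueryUpdateEmitter delegate) {
        this.delegate = delegate;
    }

    // Remembers every update for the given query type and emits the whole
    // accumulated list, so subscribers of multipleInstancesOf(Book) always
    // receive a full list instead of a single instance.
    public <Q> void emitAccumulated(Class<Q> queryType, Predicate<? super Q> filter, Object update) {
        List<Object> updates = emitted.computeIfAbsent(queryType, t -> new ArrayList<>());
        List<Object> snapshot;
        synchronized (updates) {
            updates.add(update);
            snapshot = new ArrayList<>(updates);
        }
        delegate.emit(queryType, filter, snapshot);
    }
}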
That's my two cents, hope this helps you out @Patrick!
I have a drop-down that lets you select the name of a country, and below it another drop-down where the province/state is populated based on the country selected.
When the user selects a country, a query is made and the provinces are updated accordingly.
I am running into a scenario where there is a race condition between query responses, which ends up displaying incorrect data. How should I handle this?
For example: the user selects country A and a query is fired; the network is slow; meanwhile the user changes the country to B and another request is fired. The response for B comes back quickly, but the response for A arrives afterwards. Now the screen is in a state where the country says B but the provinces are those of A.
Note: I don't want to block the country selector while the query response is being awaited.
Any suggestions on resolving this?
You may want to add a simple filter on your query response, such that the display code only runs if the response matches the current query. Configure your callback code carefully, and you can have only the latest response displayed, since you know which country is currently selected. (Admittedly, this might not be the best way to do this, but it should work.)
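As a rough sketch of that idea (the ProvinceApi interface and the fetchProvinces/showProvinces names are placeholders for whatever your networking and UI code look like):

import java.util.List;
import java.util.function.Consumer;

public class ProvincePicker {

    private final ProvinceApi api;            // placeholder for your HTTP layer
    private volatile String selectedCountry;  // the currently selected country

    public ProvincePicker(ProvinceApi api) {
        this.api = api;
    }

    public void onCountrySelected(String country) {
        selectedCountry = country;
        // Tag the request with the country it was issued for...
        api.fetchProvinces(country, provinces -> {
            // ...and drop any response that no longer matches the current
            // selection, so a slow reply for A can't overwrite B's provinces.
            if (country.equals(selectedCountry)) {
                showProvinces(provinces);
            }
        });
    }

    private void showProvinces(List<String> provinces) {
        // UI update omitted
    }

    interface ProvinceApi {
        void fetchProvinces(String country, Consumer<List<String>> callback);
    }
}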
More generally, you would have to develop this with some combination of network request(s) and caching/local database.
If the size of your dataset is not too large, or if it is not going to be updated too often, you might simply want to put it into a local database, e.g. Room DB.
If your data doesn't match this specification and it is subject to future updates in the back-end, but the overall dataset is pretty small, try caching the dataset (countries and provinces) at app start. This way your network calls are light, you always have the latest data, your UI is no longer dependent on asynchronous operations, and your code is simpler. You could also use data binding here, if appropriate.
If your dataset is very large such that fetching it all at app start is an unacceptable overhead, only then should you work purely with network request(s) each time the user changes a filter option. One option is that your callback code is configured to ignore any response that doesn't relate to the currently selected option. Another option is that your HTTP-related schema, for request and/or response, can contain some information about which "country" generated the HTTP request so that you can check for it in the response object. (The latter option can be considered overkill in most cases, but sometimes in complex scenarios, you might have to do such stuff for one or more reasons, such as functional definition, simplicity, neatness, etc.)
P.S.: I assume you are already using appropriate libraries for HTTP calls.
Hello, I am creating an app where people essentially join groups to do tasks, and each group has a unique name. I want to be able to update every user document associated with a specific group without having to loop over each user and update them one by one.
I want to know if it's a good idea to have a unique key like this in MongoDB:
{
  ...
  "specific_group_name": (whatever data point here)
  ...
}
in each user's document, so I can just call a simple
updateMany(eq("specific_group_name", (whatever data point here)), (Bson update))
to decrease the run time involved, in case there are a lot of users within the group.
Thank you
Just a point to note: instead of a specific group name, make sure it's a specific groupId. Also pay special attention to cases where you have to remove the group from users, and to cases where a particular user in the group shouldn't receive the update.
What you want to do is entirely valid, though. If you put specific_group_name/id in the collection, then you're moving the selection logic into the database. If you're doing a one-by-one update, then you have more flexibility in how you select the users to update on the Java/application side.
If the selection is simple (i.e. always update everyone in this group), then go ahead.
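For reference, a hedged sketch with the MongoDB Java driver (connection string, collection, and field names are illustrative); the point is that this is one round trip that updates every member of the group:

import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Updates.set;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.result.UpdateResult;
import org.bson.Document;

public class GroupUpdater {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> users =
                    client.getDatabase("app").getCollection("users");
            // One round trip: the server selects and updates every user
            // document carrying this groupId.
            UpdateResult result = users.updateMany(
                    eq("groupId", "g-123"),           // selection done by the db
                    set("currentTask", "clean-up"));  // whatever data point here
            System.out.println("Matched: " + result.getMatchedCount());
        }
    }
}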
I have a Salesforce app that allows me to execute REST API calls, and I need to retrieve orders (/services/data/v47.0/sobjects/Order) filtered by status.
I've found a manual that describes similar filtering on another entity (https://developer.salesforce.com/docs/atlas.en-us.api_placeorder.meta/api_placeorder/sforce_placeorder_rest_api_standalone.htm).
However, when trying to execute the following request, orders with all statuses seem to be returned:
GET /services/data/v47.0/sobjects/Order?order.status='ddd'
I also tried some variations of query params. Is this functionality supported?
The /sobjects service lets you learn dynamically which fields (standard and custom) exist in the Order table (or any other, really), what types they are, their picklist values, and so on.
To retrieve actual data you can use the query resource. (Salesforce uses a dialect of SQL called SOQL. If you've never used it before, it'll look a bit weird the moment you want to do any JOINs; it would be nice if an SF developer would fill you in.)
This might be a good start
/services/data/v47.0/query/?q=SELECT Id, Name, OrderNumber FROM Order WHERE Status = 'Draft' LIMIT 10
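For illustration, here is a hedged sketch of calling that resource with Java's built-in HTTP client (the instance URL and access token are placeholders); note that the SOQL string has to be URL-encoded:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class OrderQuery {
    public static void main(String[] args) throws Exception {
        String soql = "SELECT Id, Name, OrderNumber FROM Order "
                + "WHERE Status = 'Draft' LIMIT 10";
        String url = "https://yourInstance.my.salesforce.com/services/data/v47.0/query/?q="
                + URLEncoder.encode(soql, StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Authorization", "Bearer " + System.getenv("SF_ACCESS_TOKEN"))
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The JSON body contains totalSize, done, and a records[] array.
        System.out.println(response.body());
    }
}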
I've never seen the API you've linked to, interesting stuff. But I don't see anything obvious there that would let you filter by status, so the more generic "query anything you wish" resource might work better for you. Play with it a bit; perhaps https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_query.htm will suit your needs better.
My situation is this: I have the three following methods (I used couchbase-java-client 2.2 from Scala, and the Couchbase Server version is 4.1):
def findAll() = {
  bucket.query(N1qlQuery.simple(select("*").from(i(DatabaseBucket.USER))))
    .allRows().toList
}

def findById(id: UUID) = {
  Option(bucket.get(id.toString, classOf[RawJsonDocument])).map(i => read[User](i.content()))
}

def upsert(i: User) = {
  bucket.async().upsert(RawJsonDocument.create(i.id.toString, write(i)))
}
Basically, they are upsert, find one by id, and find all. I ran an experiment:
If I insert a User and then call findById right after, I get the user I just inserted, as expected.
If I insert and then call findAll right after, it returns an empty list.
If I insert, wait 3 seconds, and then call findAll, I can find the user I inserted.
From this, I suspect that the N1qlQuery only searches over a cached layer rather than the "persisted" layer. So, how can I force it to search the "persisted" layer?
In Couchbase 4.0 with N1QL, there are different consistency levels you can specify when querying; they correspond to different costs for updates/changes to propagate through index recalculation. These aren't tied to whether or not data is persisted; rather, it's an option you set when you issue the query. The default is "not bounded", and to make sure your upsert request is taken into consideration, you'll want to issue the query as "request plus".
To get the effect you're looking for, add N1qlParams to your creation of the N1qlQuery by using another form of the simple() method, passing N1qlParams with ScanConsistency.REQUEST_PLUS. You can read more about this in Couchbase's Developer Guide, which has a Java API example. With that change, you won't need a sleep() in there; the system will automatically service the query request once index recalculation has reached your specified consistency level.
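Concretely, with couchbase-java-client 2.x that change looks something like this (shown in Java rather than Scala; the bucket name is illustrative):

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.query.N1qlParams;
import com.couchbase.client.java.query.N1qlQuery;
import com.couchbase.client.java.query.N1qlQueryResult;
import com.couchbase.client.java.query.consistency.ScanConsistency;
import static com.couchbase.client.java.query.Select.select;
import static com.couchbase.client.java.query.dsl.Expression.i;

public class FindAllUsers {
    // findAll with REQUEST_PLUS: the query waits until the index has caught
    // up with your upsert, so no sleep() is needed.
    static N1qlQueryResult findAll(Bucket bucket) {
        N1qlParams params = N1qlParams.build()
                .consistency(ScanConsistency.REQUEST_PLUS);
        return bucket.query(
                N1qlQuery.simple(select("*").from(i("USER")), params));
    }
}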
Depending on how you're using this elsewhere in your application, there are times you may want either consistency level.
You need stronger scan consistency. Add N1qlParams to the query, using consistency(ScanConsistency.REQUEST_PLUS).
I have a requirement in which I need to capture data changes (not auditing) and life cycle states on inventory.
Technology:
Java, Oracle, Hibernate + JPA
For the data changes, we have been given a list of data elements that are to be monitored. If the element changes we are to notify a given 3rd party vendor. What I want to do is make this a generic service that we can provide to any of our current and future 3rd party vendors.
We don't care who made the change or what the new value is just that it changed.
The thought is that the data layer of our application would use annotations on each of the monitored data elements. If a data element changed, it would place a message on a queue. A message-driven bean would then read the queue and make an entry in a table; a sketch of such a bean follows the table layout below.
The table would look something like the following:

Table Name: ATL_CHANGE_TRACKER
Key columns:
  INVENTORY_ID       Inventory id of the vehicle
  SALEEVENT_ITEM_ID  SaleEvent item of the vehicle
  FIELD_CHANGED_ID   Id of the field that got changed or the action; links to the subscription
  UPDATE_DTM         The date/time when the change occurred

For a given inventory, we could have up to 200 entries in this table (monitoring 200 fields across many tables).
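To make the flow concrete, here is a hedged sketch of the message-driven bean described above; the queue name, message format, and the insert helper are all hypothetical:

import javax.ejb.MessageDriven;
import javax.jms.MapMessage;
import javax.jms.Message;
import javax.jms.MessageListener;

// Hypothetical consumer: drains the change queue and writes one row per
// changed field into ATL_CHANGE_TRACKER.
@MessageDriven(mappedName = "jms/fieldChangeQueue") // queue name is illustrative
public class ChangeTrackerBean implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            MapMessage change = (MapMessage) message;
            insertChangeRow(
                    change.getLong("inventoryId"),
                    change.getLong("saleEventItemId"),
                    change.getLong("fieldChangedId"));
        } catch (Exception e) {
            throw new RuntimeException(e); // let the container redeliver
        }
    }

    // INSERT INTO ATL_CHANGE_TRACKER
    //   (INVENTORY_ID, SALEEVENT_ITEM_ID, FIELD_CHANGED_ID, UPDATE_DTM)
    // VALUES (?, ?, ?, SYSDATE) -- JDBC/JPA plumbing omitted for brevity
    private void insertChangeRow(long inventoryId, long saleEventItemId, long fieldChangedId) {
    }
}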
Then a daemon for the given 3rd party would read from this table based on the fields it has subscribed to (possibly all of them). It would then read whatever tables it needs to in order to create the message to be sent to the 3rd party. This decouples the provider of the data from the consumer of the data.
Identify the list of fields/actions that are available:

Table Name: ATL_FIELD_ACTION
Key columns:
  ID
  NAME                   Name of the field/action, e.g. Color, Make
  REC_CRE_TIME_STAMP
  REC_CRE_USER_ID
  LAST_UPDATE_USER_ID
  LAST_UPDATE_TIME_STAMP
Subscription table: if 3rd party company xyz is interested in 60 fields, those 60 fields will be mapped in this table.

Table Name: ATL_FIELD_ACTION_SUBSCRIPTION
Key columns:
  ATL_FIELD_ACTION_ID  Id of the ATL_FIELD_ACTION table
  CONSUMER             3rd party name
  FUNCTION             Name of the 3rd party transmission it is used for
  STATUS
  REC_CRE_TIME_STAMP
  REC_CRE_USER_ID
  LAST_UPDATE_USER_ID
  LAST_UPDATE_TIME_STAMP
The second part is that there will be actions on the life cycle of the inventory which will need to be recorded as well. In this case, when the state of the inventory changes, a message will be placed on the same queue and an entry will be made in the same table.
Again, the daemon will have subscribed to these states and will collect the ones it is interested in.
The goal here is to not have the business tier/data tier care who wants the data - just that it needs to provide it so those interested can get it.
I wonder if anyone has done something like this: any gotchas, or off-the-shelf / open-source solutions to do this?
For a high-level discussion on the topic, I would suggest reading this article by Martin Fowler.
It sounds like you have write-once, read-many type of data, it might produce large volumes of data, and the data differs per client. If you ask me, this sounds like a good place to use either a NoSQL database, or to make your Oracle database act as a NoSQL database. See here for a discussion on how someone did this with MySQL.
Otherwise, you may look at creating an "immutable" database table and have Hibernate write new records every time it does an update as described here.
Couple of things.
First, you'll have to do all of this work yourself. The JPA/Hibernate lifecycle listeners do have an event for when an update occurs, but you aren't handed both the "old" object and the "new" object, so you're going to have to keep track of which fields changed by some other method.
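To illustrate what "some other method" might look like, here is a hedged sketch: snapshot the monitored fields at load time, diff them at update time, and push the result onto a queue. The Monitored contract and its helpers are hypothetical; only the JPA callbacks are standard:

import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import javax.persistence.PostLoad;
import javax.persistence.PostUpdate;

public class ChangeTrackingListener {

    // Hypothetical contract the monitored entities implement.
    public interface Monitored {
        void rememberLoadedState();            // copy monitored fields aside
        List<String> changedMonitoredFields(); // diff current state vs. snapshot
        String getId();
    }

    // In-memory, non-transactional hand-off (see the caveats below).
    static final BlockingQueue<String> CHANGE_QUEUE = new LinkedBlockingQueue<>();

    @PostLoad
    public void snapshot(Monitored entity) {
        entity.rememberLoadedState();
    }

    @PostUpdate
    public void onUpdate(Monitored entity) {
        for (String field : entity.changedMonitoredFields()) {
            CHANGE_QUEUE.offer(entity.getId() + ":" + field);
        }
    }
}

The listener would be attached to an entity with @EntityListeners(ChangeTrackingListener.class); the in-memory queue here stands in for the non-transactional queue mentioned next.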
Second, again with the lifecycle listeners: be careful inside of them, as the transaction state is a bit murky. At least on Glassfish/EclipseLink, I've had "strange" problems using either JPA or JMS from a lifecycle listener, just weird behavior. We went to a non-transactional queue to capture all of the information we track from the lifecycle events.
If having the change data committed in its own transaction is acceptable, then there is value in pushing the data on to a faster, internal queue (which can feed a listener that posts it to an MDB). This gets the auditing "out of band" with your transaction, giving you better transaction throughput. But if you need the change information committed within the same transaction, this won't work: you could put something on the queue and then the transaction may be rolled back (for whatever reason), leaving a change on the queue recording something that in fact never happened. That's a potential issue with this approach.
And if you're posting a lot of audit information, this trade-off can be a real concern.
If the auditing information has a short life span (relative to the rest of the data), then you should probably make an effort to cull the audit tables; they can get pretty large.
Also, if practical, don't disregard the use of DB triggers for this. They can be quite efficient and effective at this process.