Salesforce update master record when child objects updated - java

I have a program that pulls Salesforce Case objects along with their CaseComment and Solution objects. I also have a set of filters that let me narrow the results (keywords, fromDate, toDate, etc.). The problem is that Salesforce does not update either the Case LastModifiedDate or the SystemModstamp field when I edit or create a comment on that Case.
The most straightforward solution is to pull the Cases, Comments, and Solutions separately, extract the ParentId (CaseId) from the Comments and Solutions, manually stamp those Cases with the max LastModifiedDate of their Comments or Solutions, and then merge all the Cases. But this process is rather tedious, so I am looking for another solution, on either the Salesforce side or the client side.

If you want to avoid using triggers you can use Workflow to do the 'touching'. As of the Spring '12 release of salesforce.com, cross-object workflow is supported, so you can create a workflow rule on the CaseComment that updates a field on the parent Case. You could create a custom field specifically for this touching process or reuse any other field.
For example, to use the Case Description field as the touched field you could do the following:
Create a new workflow rule on the CaseComment object that fires whenever a record is created or edited.
Specify the rule criteria as created date not equal to null (which is always true).
Create a new workflow action for a field update.
Specify that the object should be Case and the field Description.
In the formula, enter Parent.Description as the value. This sets the Case Description to its own current value, effectively making no change to the record while still touching it.
With regard to changing the LastModifiedDate or SystemModstamp via the API, I'm not sure this is something you can do as part of an ongoing interface. Salesforce will allow you to update these audit fields via the API, but you have to contact them to enable the functionality.
The salesforce online documentation covers the audit fields in more detail. It says:
If you import data into Salesforce and need to set the value for an audit field, contact salesforce.com. Once salesforce.com enables this capability for your organization, you can set audit field values for the following objects: Account, CampaignMember, Case, CaseComment, Contact, FeedComment, FeedItem, Idea, IdeaComment, Lead, Opportunity, and Vote. The only audit field you cannot set a value for is systemModstamp.

The simplest way I can think of is to just "touch" (update without making any modifications to the data) the Case record whenever a CaseComment is created or edited. This can be accomplished with a trigger on CaseComment:
trigger CaseCommentAIAU on CaseComment (after insert, after update) {
    Set<Id> caseIds = new Set<Id>();
    for (CaseComment cc : Trigger.new) {
        caseIds.add(cc.ParentId);
    }
    Case[] caseUpdates = [SELECT Id FROM Case WHERE Id IN :caseIds];
    update caseUpdates;
}

Related

How to detect changes in records of Dataverse tables

I have a requirement to fetch the records from Dataverse in which changes have been made to specific column values. For example, say we have a table named employee with a field called position, which can change over time from intern to software developer to development lead, etc. If we currently have 10 records and the position of one employee changes, I need only that one employee record. I have gone through Retrieve and detect changes to table definitions, but I believe that relates to changes in the schema, not to changes in the data. I am using Spring Boot with Java 11, and to work with Dataverse I am using the Olingo library; I may also use the Web APIs if required. Is there a way to detect changes in the data as described above?
EDIT
To add more details: we will have a scheduled job that triggers every X minutes and needs to fetch the employee records whose position has changed since the last fetch, X minutes ago. Suppose all 3 records are updated in that X-minute interval, so the last modified time has been updated for all of them, but only some had their position changed. I need to fetch only the records whose position attribute changed; a record whose position is unchanged (say, the record with Id 2) should not be fetched.
Solution 1: Custom changes table
If you can extend your current Dataverse environment:
Create a new table called Employee Change. Add a column of type Lookup named Employee and link it to your Employee table
Modify Main Form and add Employee column to the form
Create a workflow process which would fire on field change. Inside the workflow process you create an Employee Change record and set lookup column value to the changed record
You can now query Employee Change table for changed records. You would need to expand the lookup column to get required columns from Employee table.
Example Web API query:
GET [Organization URI]/api/data/v9.1/employee?$select=createdon, employeeid
&$expand=employeeid($select=employeeid,fullname,column2,column3) HTTP/1.1
Accept: application/json
OData-MaxVersion: 4.0
OData-Version: 4.0
More info on expanding lookup columns can be found in the Dataverse Web API documentation.
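For the Java 11 stack mentioned in the question, the query above can be issued with the built-in java.net.http client. A minimal sketch that only builds the request; the entity set and column names are carried over from the example query and are assumptions, not verified Dataverse names:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class DataverseChangeQuery {
    // Builds (but does not send) the expand query from the example above.
    // Entity set and column names are assumptions copied from that example.
    static HttpRequest build(String orgUri) {
        String query = orgUri + "/api/data/v9.1/employee"
                + "?$select=createdon,employeeid"
                + "&$expand=employeeid($select=employeeid,fullname,column2,column3)";
        return HttpRequest.newBuilder(URI.create(query))
                .header("Accept", "application/json")
                .header("OData-MaxVersion", "4.0")
                .header("OData-Version", "4.0")
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = build("https://yourorg.example.crm.dynamics.com");
        // Send with: HttpClient.newHttpClient().send(req, HttpResponse.BodyHandlers.ofString())
        System.out.println(req.uri());
    }
}
```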
Solution 2: Auditing
Use built-in auditing feature
Make sure auditing is enabled; details can be found in the docs
Enable auditing on the required column in the Employee table
Query audit records for changes in the Employee table, paying attention only to changes to the specific attributes of interest
You will get a list of changed records, and then you have to query once more to retrieve the columns of those records
Solution 3: Push instead of pull
It might make more sense to push changes from Dataverse to your API instead of constantly querying for changes.
You could use Microsoft Power Automate to create a simple flow which would call your API / platform when a change is detected in Dataverse
A good start could be exploring the following Power Automate template: When a record is updated in Microsoft Dataverse, send an email. You could then replace the "send email" steps with calls to your other APIs

NetSuite SuiteTalk SOAP Api: Update Certain Fields ONLY on a Record

We are hitting a problem with updating a NetSuite Sales Order (specifically we're updating a custom field) but are bouncing off some Read-only fields that we don't even explicitly write to in our code.
We retrieve the order, update the custom field and then call WriteResponse rc = this.nsPort.update(order); where order is an instance of SalesOrder pulled by internalID and nsPort is an instance of NetSuitePortType. The call to update() fails with an exception:
java.lang.Exception: You do not have permissions to set a value for element subtotal
due to one of the following reasons: 1) The field is read-only; 2) An associated feature
is disabled; 3) The field is available either when a record is created or updated, but
not in both cases.
Which field is read-only is immaterial here; what matters is that we're (unintentionally) sending back an update that involves read-only fields.
It strikes me that we'd ideally only send an update that writes to just the custom field we're interested in.
Is there any way to pull a record from NetSuite and then update just certain fields? Or is there a way to tell SuiteTalk to update only certain fields when we call update()?
If you just want to update certain fields, send only those fields along with the internalId. For example, to update just the memo and a custom field on a sales order, use (in Python):
sales_order = soap.sales.SalesOrder(
    internalId=12,
    memo='I updated the memo, but I did not shoot the deputy',
    customFieldList=soap.core.CustomFieldList(customField=[
        soap.core.CustomString(scriptId='custbody_memo', value='I also did not shoot the deputy'),
    ])
)
soap.update(sales_order)
This will generate XML that includes only the internalId, the memo, and the custom body memo field. NetSuite will only update the fields included in the SOAP message.
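The same principle can be sketched in Java: construct the update record from scratch with only the internalId and the fields you mean to change, instead of echoing back the fully populated record you fetched. In this sketch a Map stands in for the generated SuiteTalk SalesOrder class, so the field names are purely illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SparseUpdate {
    // Build a minimal update payload: internalId plus only the changed fields.
    // With the real SuiteTalk client you would set just these fields on a fresh
    // SalesOrder instance, never the record you originally retrieved.
    static Map<String, Object> payload(String internalId, Map<String, Object> changes) {
        Map<String, Object> record = new LinkedHashMap<>();
        record.put("internalId", internalId);
        changes.forEach((field, value) -> {
            if (value != null) {
                record.put(field, value); // untouched/read-only fields are never sent
            }
        });
        return record;
    }

    public static void main(String[] args) {
        Map<String, Object> changes = new LinkedHashMap<>();
        changes.put("memo", "updated memo");
        changes.put("custbody_memo", "updated custom field");
        changes.put("subtotal", null); // read-only: excluded from the payload
        System.out.println(payload("12", changes));
    }
}
```

Because the retrieved record never round-trips, read-only fields like subtotal simply never appear in the outgoing message.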

Is it a good idea to have unique keys to better aggregate data in MongoDB

Hello, I am creating an app where people join groups to do tasks, and each group has a unique name. I want to be able to update every user document associated with a specific group without having to loop over each user and update one at a time.
I want to know if it's a good idea to have a unique key like this in MongoDB:
{
    ...
    "specific_group_name": (whatever data point here)
    ...
}
in each of the user documents, so I can just call a single
updateMany(eq("specific_group_name", (whatever data point here)), (Bson update))
to decrease the run time involved, in case there are a lot of users within the group.
Thank you
Just a point to note: instead of a specific group name, make sure it's a specific groupId. Also pay special attention to cases where you have to remove the group from people, and to cases where a person in a particular group shouldn't receive the update.
What you want to do is entirely valid, though. If you put the specific group name/id in the collection, then you're moving the selection logic to the db. If you're doing a one-by-one update, then you have more flexibility in how you select users to update on the Java/application side.
If the selection is simple (i.e., always update people in this group) then go ahead.
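With the MongoDB Java driver this is a single updateMany over one indexed field rather than a per-user loop; the real call would look like collection.updateMany(Filters.eq(...), Updates.set(...)). Since the driver types aren't available here, the sketch below builds the equivalent filter and update documents as plain maps, and the field names (groupId, taskStatus) are made up for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class GroupUpdate {
    // Documents a single updateMany would use. With the real driver:
    //   collection.updateMany(Filters.eq("groupId", groupId),
    //                         Updates.set("taskStatus", status));
    // Field names here are illustrative, not taken from the question.
    static Map<String, Object> filter(String groupId) {
        Map<String, Object> f = new LinkedHashMap<>();
        f.put("groupId", groupId); // one indexed key instead of N per-user queries
        return f;
    }

    static Map<String, Object> update(String status) {
        Map<String, Object> fields = new LinkedHashMap<>();
        fields.put("taskStatus", status);
        Map<String, Object> u = new LinkedHashMap<>();
        u.put("$set", fields); // only touches the named field in every matched doc
        return u;
    }

    public static void main(String[] args) {
        System.out.println(filter("group-42")); // {groupId=group-42}
        System.out.println(update("done"));     // {$set={taskStatus=done}}
    }
}
```

Indexing the groupId field keeps the one-shot update fast even when a group has many users.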

How do I make sure my N1QL query considers recent changes?

My situation is that I have the 3 following methods (I use couchbase-java-client 2.2 from Scala, and the Couchbase Server version is 4.1):
def findAll() = {
  bucket.query(N1qlQuery.simple(select("*").from(i(DatabaseBucket.USER))))
    .allRows().toList
}

def findById(id: UUID) = {
  Option(bucket.get(id.toString, classOf[RawJsonDocument])).map(i => read[User](i.content()))
}

def upsert(i: User) = {
  bucket.async().upsert(RawJsonDocument.create(i.id.toString, write(i)))
}
Basically, they are insert, find one by id, and find all. I did an experiment:
I insert a User, then call findById right after; I correctly get the user I inserted.
I insert and then call findAll right after; it returns empty.
I insert, wait 3 seconds, and then call findAll; I can find the one I inserted.
From that, I suspect that the N1QL query only searches a cached layer rather than the "persisted" layer. So how can I force it to search the "persisted" layer?
In Couchbase 4.0 with N1QL, there are different consistency levels you can specify when querying which correspond to different cost for updates/changes to propagate through index recalculation. These aren't tied to whether or not data is persisted, but rather it's an option when you issue the query. The default is "not bounded" and to make sure that your upsert request is taken into consideration, you'll want to issue this query as "request plus".
To get the effect you're looking for, you'll want to add N1qlParams on your creation of the N1qlQuery by using another form of the simple() method. Add a N1qlParams with ScanConsistency.REQUEST_PLUS. You can read more about this in Couchbase's Developer Guide; there's a Java API example of it. With that change, you won't need a sleep() in there; the system will automatically service the query request once index recalculation has reached your specified level.
Depending on how you're using this elsewhere in your application, there are times you may want either consistency level.
You need stronger scan consistency. Add a N1qlParams to the query, using consistency(ScanConsistency.REQUEST_PLUS)

HTTP PUT to update database using partial Json - Spring, Postgres

I am writing a web api using Spring and Postgres.
I have a case where I take a Json object item
The uri is /api/item/{itemId}
Request type is PUT
Json:
{
    "name": "itemname",
    "description": "item description here"
}
So I do (using JdbcTemplate) a SELECT statement to check if the itemId exists and then update if it does.
I would also like to support partial PUTs taking JSON that looks like this:
{
    "name": "itemname"
}
OR
{
    "description": "item description here"
}
where only the respective fields are updated. In Spring, the variables not present are automatically null.
The way this is implemented now is:
SELECT all columns from the items table
Sequentially check every single expected variable for null and if they are null, replace the null with the value selected from the table in step 1.
UPDATE all columns with the values (none of which should be null if the table has a not null constraint)
Question: How do you do this without == null or != null checks? It seems to be poor design and involves iterating through every expected variable for every single PUT request (and I will have many of those).
Desired responses (in order of desirability):
There's a way in Postgres where if a null value is input, the column-value is simply not written to the database (and no error is produced)
There is a way to use Spring (and Jackson) to create a Java object with only the provided values and a way to generate SQL and JdbcTemplate code that only updates those specific columns.
Patch is the way of life - implement it
Change the front-end to always send everything
You have two choices when working with the database:
Just update what has changed, doing everything by yourself.
Get Jackson and Hibernate to do it for you.
So let's look at No. 1:
Let's say you're looking right now at the contents of an html form that has been sent back to the server. Take every field in the html form and update only those fields in the database using an SQL statement. Anything that is not in the form will not get updated in your database table. Simple! Works well. You don't need to worry about anything that is not in the form. You can restrict your update to the form's contents.
In general this is a simple option, apart from one problem, which is html checkboxes. If you are doing an update, html checkboxes can catch you out because due to a little design quirk, they don't get sent back to the server if they are unchecked.
No. 2: Perhaps you're looking for Hibernate, which you didn't mention. Jackson will fill a JSON object for you (it must have a record id). Use Hibernate to populate a Java class with the existing record, update it with the new values Jackson has provided, then tell Hibernate to merge() it into the existing record in the database, which it will. I also use Roo to create my Hibernate-ready classes for me.
No. 2 is hard to learn and set up, but once you've sussed it, it's very easy to change things, add fields, and so on.
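For what it's worth, desired response 1 does exist on the Postgres side: COALESCE(?, column) keeps the current value whenever the bound parameter is null, so one static UPDATE covers every partial PUT; the trade-off is that you can then never deliberately set a column to NULL. A minimal sketch (the items table and its columns are assumed from the JSON above), with a dynamic SET clause as the alternative:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

public class PartialUpdateSql {
    // Option A: static SQL; a null parameter leaves its column unchanged.
    static final String COALESCE_SQL =
            "UPDATE items SET name = COALESCE(?, name), "
            + "description = COALESCE(?, description) WHERE id = ?";

    // Option B: build the SET clause from only the non-null fields, so the
    // statement never mentions columns the request did not include.
    static String dynamicSql(Map<String, Object> fields) {
        StringJoiner set = new StringJoiner(", ");
        for (Map.Entry<String, Object> e : fields.entrySet()) {
            if (e.getValue() != null) {
                set.add(e.getKey() + " = ?");
            }
        }
        return "UPDATE items SET " + set + " WHERE id = ?";
    }

    public static void main(String[] args) {
        Map<String, Object> partial = new LinkedHashMap<>();
        partial.put("name", "itemname");
        partial.put("description", null); // absent from the PUT body
        System.out.println(dynamicSql(partial)); // UPDATE items SET name = ? WHERE id = ?
    }
}
```

Either statement plugs straight into JdbcTemplate.update(sql, args...); with option B the argument array must be built in the same order as the generated SET clause.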
