I'm writing software for data synchronization between a custom application and SugarCRM, so I need an updateOrCreate() function. My problem is that the custom application uses different UUIDs than SugarCRM, so I can't look up the UUID to decide between update and create. I therefore want to save the custom UUID in a custom field of SugarCRM.
But I have no idea how to do that over the REST API of SugarCRM.
By the way, I'm writing a Java application.
Thank you for your help!
As far as I'm aware there is no update-or-create API (see https://your-sugarsite/rest/v10/help); however, if you just want to use the API (rather than customize it) you could sync data like this:
1) Fetch all IDs of records that have a custom UUID by using the POST /rest/v10/<module>/filter endpoint and a payload similar to:
{
  "offset": 0,
  "max_num": 1000,
  "fields": ["id", "custom_uuid_c"],
  "filter": [{"custom_uuid_c": {"$not_empty": ""}}]
}
or, if you just need a specific custom UUID at a time:
{
  "offset": 0,
  "max_num": 1000,
  "fields": ["id"],
  "filter": [{"custom_uuid_c": {"$equals": "example-custom-uuid"}}]
}
The response will look something like this:
{
  "next_offset": -1,
  "records": [
    {"id": "example-sugar-uuid", "custom_uuid_c": "example-custom-uuid"},
    ...
  ]
}
Notes:
Make sure to evaluate next_offset, as even with a high max_num you may not get all records at once because of server limits. As long as next_offset isn't -1, use its value as offset in a new request to fetch the remaining records.
You can supply all the field names you need to sync in the fields array, so that you get that information early and can check whether an update is required at all (maybe the data is still up to date?).
Sugar also always includes certain fields in the response, whether they were requested or not (e.g. id and date_modified). I did not include them all in the response snippets for the sake of simplicity.
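If it helps, here is a minimal Java sketch of this step. It assumes Java 11+'s built-in HTTP client and Jackson for JSON parsing; the base URL, module name, and the OAuth token (obtained beforehand from /rest/v10/oauth2/token) are placeholders you will have to fill in. It pages through the filter endpoint via next_offset and builds the custom-uuid => sugar-id lookup table used in step 2:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.HashMap;
import java.util.Map;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class SugarSync {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Pages through POST /rest/v10/<module>/filter until next_offset is -1
    // and maps each custom_uuid_c to its Sugar id.
    static Map<String, String> fetchCustomUuidIndex(HttpClient http, String baseUrl,
                                                    String module, String oauthToken) throws Exception {
        Map<String, String> customUuidToSugarId = new HashMap<>();
        int offset = 0;
        while (offset != -1) {
            String payload = "{"
                    + "\"offset\": " + offset + ","
                    + "\"max_num\": 1000,"
                    + "\"fields\": [\"id\", \"custom_uuid_c\"],"
                    + "\"filter\": [{\"custom_uuid_c\": {\"$not_empty\": \"\"}}]"
                    + "}";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl + "/rest/v10/" + module + "/filter"))
                    .header("Content-Type", "application/json")
                    .header("OAuth-Token", oauthToken) // token from /rest/v10/oauth2/token
                    .POST(HttpRequest.BodyPublishers.ofString(payload))
                    .build();
            HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
            JsonNode body = MAPPER.readTree(response.body());
            for (JsonNode rec : body.get("records")) {
                customUuidToSugarId.put(rec.get("custom_uuid_c").asText(), rec.get("id").asText());
            }
            offset = body.get("next_offset").asInt(); // -1 signals the last page
        }
        return customUuidToSugarId;
    }
}

You would call it with something like fetchCustomUuidIndex(HttpClient.newHttpClient(), "https://your-sugarsite", "Contacts", token), where "Contacts" stands in for whichever module you sync.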
2)
Based on the information received in the previous step you know which Sugar ID belongs to which custom UUID, and you can detect/prepare data for updates.
If you need to sync everything and retrieve the complete list first, I suggest you build a lookup table custom-uuid => sugar-id (as in the sketch above), so that you do not have to loop through the data array and compare fields whenever you look for a specific UUID. Don't forget to consider the possibility of a custom UUID being present in more than one Sugar record at a time, unless you enforce uniqueness on the server/database side.
3)
Now that you have all the information you need, you can update and create records as needed:
Update existing record: PUT /rest/v10/<module>/<record_id>
Create missing record: POST /rest/v10/<module>
If you want to send a lot of creates and/or updates in a single request, have a look at the POST /rest/v10/bulk API - if your version of Sugar has it.
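Continuing the sketch from step 1 (same assumptions; fieldsJson is the JSON body with the fields you want to write, and on create it should include custom_uuid_c), the update-or-create decision could look like this:

    // Updates the record if its custom UUID is already known, otherwise creates it.
    static void updateOrCreate(HttpClient http, String baseUrl, String module, String oauthToken,
                               Map<String, String> customUuidToSugarId,
                               String customUuid, String fieldsJson) throws Exception {
        String sugarId = customUuidToSugarId.get(customUuid);
        HttpRequest.Builder builder = HttpRequest.newBuilder()
                .header("Content-Type", "application/json")
                .header("OAuth-Token", oauthToken);
        if (sugarId != null) {
            // existing record: PUT /rest/v10/<module>/<record_id>
            builder.uri(URI.create(baseUrl + "/rest/v10/" + module + "/" + sugarId))
                   .PUT(HttpRequest.BodyPublishers.ofString(fieldsJson));
        } else {
            // missing record: POST /rest/v10/<module>
            builder.uri(URI.create(baseUrl + "/rest/v10/" + module))
                   .POST(HttpRequest.BodyPublishers.ofString(fieldsJson));
        }
        HttpResponse<String> response = http.send(builder.build(), HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new IllegalStateException("Sync failed (" + response.statusCode() + "): " + response.body());
        }
    }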
Final notes:
The filter operator definitions on /rest/v10/help seem incomplete; for more info you can check the filter docs.
Using Elasticsearch 8.4.3 with Java 17 and a cluster of 3 nodes, all of which are master-eligible, we start with the following situation:
an index products-2023-01-12-0900 which has the alias current-products
We then start a job that creates a new index products-2023-01-12-1520, and at the end, using the elastic-rest-client on the client side and the alias API, we make this call:
At 2023-01-12 16:27:26,893:
POST /_aliases
{"actions":[
{
"remove": {
"alias":"current-products",
"index":"products-*"
}
},
{
"add":{
"alias":"current-products",
"index":"products-2023-01-12-1520"}
}
]}
And we get the following response 26 ms later, with HTTP response code 200:
{"acknowledged":true}
But looking at what we end up with, the old index still has the current-products alias.
I don't understand why this happens, and it does not happen 100% of the time (it happened 2 times out of around 10 indexations).
Is it a known bug, or regular behaviour?
Edit for @warkolm:
GET /_cat/aliases?v before indexation as of now:
alias index filter routing.index routing.search is_write_index
current-products products-2023-01-13-1510 - - - -
It appears the alias swap itself is not the problem. When you POST to the _aliases endpoint with "remove" and "add" actions, Elasticsearch applies all actions in the request atomically, based on the state of the indices at the time the request is executed: the alias is only updated if all actions succeed, and the request fails as a whole otherwise.
However, it is possible that other processes or jobs are modifying the indices or aliases at the same time, and this can cause conflicts or inconsistencies. Additionally, when you use the wildcard character (*) in the "index" field of the "remove" action, the pattern is resolved when the request executes, so the alias is removed from every index matching the pattern at that moment, which may not be the intended behavior.
To reduce the room for surprises, explicitly specify the index you want to remove the alias from instead of using the wildcard.
Here is an example that updates the alias while naming both indices explicitly:
POST /_aliases
{
"actions": [
{ "remove": { "index": "products-2023-01-12-0900", "alias": "current-products" } },
{ "add": { "index": "products-2023-01-12-1520", "alias": "current-products" } }
]
}
This way, the alias is only removed from the specific index "products-2023-01-12-0900" and added to the specific index "products-2023-01-12-1520". This helps avoid conflicts or inconsistencies caused by other processes modifying the indices or aliases at the same time.
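For reference, a minimal sketch of issuing that request from Java with the low-level REST client; the host, port, and index names are placeholders for your setup:

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class AliasSwap {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("POST", "/_aliases");
            // remove from the exact old index, add to the exact new index, in one atomic request
            request.setJsonEntity("{\"actions\":["
                    + "{\"remove\":{\"index\":\"products-2023-01-12-0900\",\"alias\":\"current-products\"}},"
                    + "{\"add\":{\"index\":\"products-2023-01-12-1520\",\"alias\":\"current-products\"}}"
                    + "]}");
            Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());
        }
    }
}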
Additionally, it is worth staying on the most recent patch release you can, as bug fixes land there regularly; but since you are already on 8.4.3, concurrent modification is the more likely explanation for what you are seeing.
In conclusion, what you are encountering may not be a known bug; it is expected behaviour when multiple processes modify the indices or aliases at the same time, and naming the exact indices in the remove/add actions can help you avoid it.
I have a list of status enum values which I am currently iterating over, using a basic counter to store how many entries have the specific value I am looking for. I want to improve greatly on this, however, and think there may be a way to use some kind of JPA query on a paging and sorting repository to accomplish the same thing.
My current version, which isn't as optimized as I would like, is as follows.
public enum MailStatus {
    SENT("SENT"),
    DELETED("DELETED"),
    SENDING("SENDING");

    private final String value;

    MailStatus(String value) { this.value = value; }
}
var mailCounter = 0
val mails = mailService.getAllMailForUser(userId).toMutableList()
mails.forEach { mail ->
    if (mail.status == MailStatus.SENT) {
        mailCounter++
    }
}
With a paging and sorting JPA repository, is there some way to query this instead and get a count of all mail that has a status of SENT only?
I tried the following but seem to be getting everything rather than just the 'SENT' status.
fun countByUserIdAndMailStatusIn(userId: UUID, mailStatus: List<MailStatus>): Long
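For reference, a derived count query along these lines pushes the counting into the database instead of iterating in memory. This is only a sketch (in Java); the Mail entity and its userId/status property names are guesses based on the snippets above:

import java.util.UUID;
import org.springframework.data.repository.PagingAndSortingRepository;

// assumes a Mail entity with properties "userId" and "status" (names are guesses)
public interface MailRepository extends PagingAndSortingRepository<Mail, UUID> {

    // derives roughly: SELECT COUNT(m) FROM Mail m WHERE m.userId = ?1 AND m.status = ?2
    long countByUserIdAndStatus(UUID userId, MailStatus status);
}

Since Spring Data derives the query from the method name, a count method that returns everything is often a sign that a property name in the method (here mailStatus) does not exactly match the field name on the entity.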
I'm using the testnet to validate my transaction. Transaction:
{"transaction":"ECAB482EB34177FA1B1E6C724F038C42308004B1F307A169FAEA88C825E11642","command":"tx","id":0}
Response :
{id=0, status='success', errorMessage='null', result=TxResult{validated=false}}
I'm using WebSocket, method 'tx', to check. What is the best course of action to figure out the problem? Is there a way to see the reason this is not validated on some of the testnet validators?
I'm connected to wss://s.altnet.rippletest.net:51233; the address I use is rKHDh61BpcojAoiATgJgDaVwdSJ64fGNwF. Can someone help?
Fee is at 1 000 000 drops. This is the transaction blob 1200002200000000240000000061D4838D7EA4C680000000000000000000000000005553440000000000C882FD6AB9862C4F90E57E1BA15C248CABAD5BF96840000000000F42407321033BF063167F21FF6C01045B4E2F03F519879B552D2611F0E885E01F08C88D15247446304402202E90609AAFBF4C105408CFF2377D48085879BEE3C7DE57AF125F73926284362A022002D7A487F5929F9A3E1050FC2B5D6AE1DD5384647AD1ABF6D322765F0ABE0A498114C882FD6AB9862C4F90E57E1BA15C248CABAD5BF983148DC6B336C7D3BE007297DB086B1D3483DEA24C2A
Is my transaction faulty? Then why was it correctly submitted to the network? It seems to be valid, but why is it not validated and hence finalized in the ledger?
Note: the responses use my internal model to represent some properties, which is why names might be slightly different and some properties are omitted.
Result from 'submit' call :
Result :SubmitResult{engineResult='tefPAST_SEQ', engineResultCode=-190, engineResultMessage='This sequence number has already passed.', txBlob='1200002200000000240000000061D4838D7EA4C680000000000000000000000000005553440000000000C882FD6AB9862C4F90E57E1BA15C248CABAD5BF96840000000000F42407321033BF063167F21FF6C01045B4E2F03F519879B552D2611F0E885E01F08C88D15247446304402202E90609AAFBF4C105408CFF2377D48085879BEE3C7DE57AF125F73926284362A022002D7A487F5929F9A3E1050FC2B5D6AE1DD5384647AD1ABF6D322765F0ABE0A498114C882FD6AB9862C4F90E57E1BA15C248CABAD5BF983148DC6B336C7D3BE007297DB086B1D3483DEA24C2A', txJson=TxJson{transactionType='Payment', account='rKHDh61BpcojAoiATgJgDaVwdSJ64fGNwF', destination='rDveJyEotoUp9jCD1Ghi2ktEBnhHiA6RBB', amount=Amount{currency='USD', value=1, issuer='rKHDh61BpcojAoiATgJgDaVwdSJ64fGNwF'}, fee='1000000', flags=0, sequence=0, signingPubKey='033BF063167F21FF6C01045B4E2F03F519879B552D2611F0E885E01F08C88D1524', txnSignature='304402202E90609AAFBF4C105408CFF2377D48085879BEE3C7DE57AF125F73926284362A022002D7A487F5929F9A3E1050FC2B5D6AE1DD5384647AD1ABF6D322765F0ABE0A49', hash='ECAB482EB34177FA1B1E6C724F038C42308004B1F307A169FAEA88C825E11642'}}
I submitted it a few times, so 'tefPAST_SEQ' is present.
Looks like your transaction object has a sequence field in it.
According to THIS, your sequence can be auto-filled. It can be set manually in case you want to submit multiple transactions at once, incrementing the sequence yourself.
This gives you control over the order in which the transactions are executed.
If that doesn't matter, you can just go without setting the sequence.
In your case your account looks like this (using account_info):
{
"result": {
"account_data": {
"Account": "rKHDh61BpcojAoiATgJgDaVwdSJ64fGNwF",
"Balance": "10000000000",
"Flags": 0,
"LedgerEntryType": "AccountRoot",
"OwnerCount": 0,
"PreviousTxnID": "12CA4E5AAF4198155FF3F16E53D35353B051F4AB5E01749833202339B48D187A",
"PreviousTxnLgrSeq": 11450559,
"Sequence": 1,
"index": "169B6BA91A54B2EC86EFB618995A59E76F07853BB88AF231776118339FFD7268"
},
"ledger_hash": "449E3420C6B1C6959FA794066264432EF4E98543B0C6582B00D6CD28DE33B8F8",
"ledger_index": 11523855,
"status": "success",
"validated": true
}
See the result.account_data.Sequence being 1?
The reason you're seeing "This sequence number has already passed" is that you've set sequence=0 in your transaction (as shown by the result from the 'submit' call).
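If it helps, here is a minimal sketch (assuming Java 11+'s built-in WebSocket client) of asking the same testnet endpoint for the account's current Sequence before building the next transaction:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class AccountSequenceCheck {
    public static void main(String[] args) throws Exception {
        WebSocket ws = HttpClient.newHttpClient().newWebSocketBuilder()
                .buildAsync(URI.create("wss://s.altnet.rippletest.net:51233"), new WebSocket.Listener() {
                    @Override
                    public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
                        // the response carries result.account_data.Sequence
                        System.out.println(data);
                        return WebSocket.Listener.super.onText(webSocket, data, last);
                    }
                }).join();
        ws.sendText("{\"command\":\"account_info\","
                + "\"account\":\"rKHDh61BpcojAoiATgJgDaVwdSJ64fGNwF\","
                + "\"ledger_index\":\"current\"}", true);
        Thread.sleep(3000); // crude wait for the response; fine for a one-off check
        ws.sendClose(WebSocket.NORMAL_CLOSURE, "done").join();
    }
}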
On a side note, I see you've set currency='USD', which means you have to open a trust line first; your account currently has 0 account_lines.
Either way, good luck using XRP ;)
I came across this blog post while looking for a way to organize relationships. What I'm getting confused by is the syntax behind the following statement. I realize that, by virtue of the JavaScript variables, the following is possible:
var party = {
_id: "chessparty",
name: "Chess Party!",
attendees: ["seanhess", "bob"]
}
var user = { _id: "seanhess", name: "Sean Hess", events: ["chessparty"]}
db.events.save(party)
db.users.save(user)
db.events.find({_id: {$in: user.events}}) // events for user
db.users.find({_id: {$in: party.attendees}}) // users for event
What's throwing me for a spin is the last two lines, since what I'm trying to do is accomplish something like this in Java, more specifically with the Camel/MongoDB component. So I understand the idea, but I want to translate it to Java.
I've been referencing the following documentation and looking at the "findAll" operation. Would I need to first run a query to get the array, for example "user.events", and then run a second query to find the list of events? Or is there a way to reference the field "events" in collection "db.users" as part of the query on "db.events"?
Something to the tune of the following with a single query..
pseudo idea: db.events.find({_id: {$in: [db.user.events]}})
Ultimately I'm looking to translate this into something like the following..
from("direct:findAll")
.setBody().constant("{ \"_id\": {$in :\"user.events\" }}")
.to("mongodb:myDb?database=sample&collection=events&operation=findAll")
.to("mock:resultFindAll");
I'm a bit new to the MongoDB Camel component, so I'm wondering if there are any gurus who have been there and done that sort of thing, and have any advice on the subject. Or to find out, without two days of trial and error, that this simply isn't possible?
Thanks!
I thought I'd wrap this question up; it has been some time now, and a few weeks ago I was able to work past this.
Basically I wound up storing an array of user IDs in the events collection.
example:
{
  _id: "22bjh2345j2k3v235",
  eventName: "something",
  eventDate: ISODate(...),
  attendees: [
    "abc123",
    "def098",
    "etc..."
  ]
}
Essentially this assigns users to events. This way I could find all events a user was participating in, and I wound up with a list of users per event.
If I wanted to find all events for a user:
from("direct:findAll")
.setBody().simple("{ \"attendees\": \"${header.userId}\" }")
.to("mongodb:myDb?database=sample&collection=events&operation=findAll")
.to("mock:resultFindAll");
I am new to ElasticSearch and Couchbase. I am building a sample Java application to learn more about ElasticSearch and Couchbase.
Reading about the ElasticSearch Java API, filters are better used in cases where sorting on score is not necessary, and for caching.
I still haven't figured out how to use FilterBuilders and have following questions:
Can FilterBuilders be used alone to search?
Or do they always have to be used with a Query? (If true, can someone please list an example?)
Going through the documentation: if I want to perform a search based on field values and want to use FilterBuilders, how can I accomplish that? (Using AndFilterBuilder, TermFilterBuilder or InFilterBuilder? I am not clear about the differences between them.)
For the 3rd question, I actually tested it with search using queries and using filters as shown below.
I got an empty result (no rows) when I tried searching using FilterBuilders. I am not sure what I am doing wrong.
Any examples will be helpful. I have had a tough time going through the documentation, which I found sparse, and even searching led to various unreliable user forums.
private void processQuery() {
SearchRequestBuilder srb = getSearchRequestBuilder(BUCKET);
QueryBuilder qb = QueryBuilders.fieldQuery("doc.address.state", "TX");
srb.setQuery(qb);
SearchResponse resp = srb.execute().actionGet();
System.out.println("response :" + resp);
}
private void searchWithFilters(){
SearchRequestBuilder srb = getSearchRequestBuilder(BUCKET);
srb.setFilter(FilterBuilders.termFilter("doc.address.state", "tx"));
//AndFilterBuilder andFb = FilterBuilders.andFilter();
//andFb.add(FilterBuilders.termFilter("doc.address.state", "TX"));
//srb.setFilter(andFb);
SearchResponse resp = srb.execute().actionGet();
System.out.println("response :" + resp);
}
--UPDATE--
As suggested in the answer, changing to lowercase "tx" works. With that resolved, I still have the following questions:
In what scenario(s) are filters used with a query? What purpose will this serve?
Difference between InFilter, TermFilter and MatchAllFilter. Any illustration will help.
Right, you should use filters to exclude documents from even being considered when executing the query. Filters are faster since they don't involve any scoring, and are cacheable as well.
That said, filters are used together with the search API, which executes a query and accepts an optional filter. If you only have a filter, you can just use the match_all query together with your filter. A filter can be a simple one, or a compound one that combines multiple filters.
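To make that concrete against the pre-1.x Java API the question uses, a sketch (reusing the getSearchRequestBuilder helper from the question):

import org.elasticsearch.action.search.SearchRequestBuilder;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.index.query.FilterBuilders;
import org.elasticsearch.index.query.QueryBuilders;

private void searchWithFilteredQuery() {
    SearchRequestBuilder srb = getSearchRequestBuilder(BUCKET); // helper from the question
    // match_all query wrapped in a filtered query with a term filter;
    // the term is lowercase because the default analyzer lowercases terms at index time
    srb.setQuery(QueryBuilders.filteredQuery(
            QueryBuilders.matchAllQuery(),
            FilterBuilders.termFilter("doc.address.state", "tx")));
    SearchResponse resp = srb.execute().actionGet();
    System.out.println("response :" + resp);
}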
Regarding the Java API, the names used are the names of the filters available, no big difference. Have a look at this search example, for instance. In your code I don't see where you call setFilter on your SearchRequestBuilder object. You don't seem to need the and filter either, since you are using a single filter. Furthermore, it might be that you are indexing using the default mappings, in which case the term "TX" gets lowercased. That's why you don't find any match when you search using the term filter; try searching for the lowercase "tx".
You can either change your mapping if you want to keep the "TX" term as it is while indexing, probably setting the field as not_analyzed if it should only be a single token. Otherwise you can change the filter: you might want to have a look at a query that is analyzed, so that your query will be analyzed the same way the content was indexed.
Have a look at the query DSL documentation for more information regarding queries and filters:
MatchAllFilter: matches all your documents; not that useful, I'd say
TermFilter: filters documents that have fields containing a term (not analyzed)
AndFilter: a compound filter used to combine two or more filters with AND
Don't know what you mean by InFilterBuilder, couldn't find any filter with this name.
The query usually contains what the user types in through the text search box. Filters are more way to refine the search, for example clicking on facet entries. That's why you would still have the query plus one or more filters.
To append to what @javanna said:
A lot of confusion can come from the fact that filters can be defined in several ways:
standalone (with a required query, for instance match_all if all you need is the filters) (http://www.elasticsearch.org/guide/reference/api/search/filter/)
or as part of a filtered query (http://www.elasticsearch.org/guide/reference/query-dsl/filtered-query/)
What's the difference, you might ask, since you can construct exactly the same logic in both ways?
The difference is that a query operates on BOTH the result set and any facets you have defined, whereas a filter (when defined standalone) only operates on the result set and NOT on any facets you may have defined (explained here: http://www.elasticsearch.org/guide/reference/api/search/filter/).
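In the same old Java API, the two forms would look roughly like this (a sketch; setFilter is the standalone form, filteredQuery the one that also restricts facets):

// standalone filter: facets are computed before it applies
SearchRequestBuilder standalone = getSearchRequestBuilder(BUCKET)
        .setQuery(QueryBuilders.matchAllQuery())
        .setFilter(FilterBuilders.termFilter("doc.address.state", "tx"));

// filtered query: the filter restricts the facets too
SearchRequestBuilder filtered = getSearchRequestBuilder(BUCKET)
        .setQuery(QueryBuilders.filteredQuery(
                QueryBuilders.matchAllQuery(),
                FilterBuilders.termFilter("doc.address.state", "tx")));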
To add to the other answers, InFilter is only used with FilterBuilders. The definition is, InFilter: a filter for a field based on several terms, matching on any of them.
The query-building Java API uses FilterBuilders, a factory for filter builders that can dynamically create a query from Java code. We do this using a form, and we build our query based on user selections from it with checkboxes, options, and dropdowns.
Here is some example code for FilterBuilders; the snippet below, from that link, uses InFilter:
FilterBuilder filterBuilder;
User user = (User) auth.getPrincipal();
if (user.getGroups() != null && !user.getGroups().isEmpty()) {
filterBuilder = FilterBuilders.boolFilter()
.should(FilterBuilders.nestedFilter("userRoles", FilterBuilders.termFilter("userRoles.key", auth.getName())))
.should(FilterBuilders.nestedFilter("groupRoles", FilterBuilders.inFilter("groupRoles.key", user.getGroups().toArray())));
} else {
filterBuilder = FilterBuilders.nestedFilter("userRoles", FilterBuilders.termFilter("userRoles.key", auth.getName()));
}
...