Ripple XRP Ledger - Can't get transaction validated (Testnet) - Java

I'm using the testnet to validate my transaction. The transaction:
{"transaction":"ECAB482EB34177FA1B1E6C724F038C42308004B1F307A169FAEA88C825E11642","command":"tx","id":0}
Response:
{id=0, status='success', errorMessage='null', result=TxResult{validated=false}}
I'm using a WebSocket connection and the 'tx' method to check. What is the best course of action to figure out the problem? Is there a way to see the reason this is not being validated on some of the testnet validators?
I'm connected to wss://s.altnet.rippletest.net:51233, and the address I use is rKHDh61BpcojAoiATgJgDaVwdSJ64fGNwF. Can someone help?
The fee is set to 1,000,000 drops. This is the transaction blob:
1200002200000000240000000061D4838D7EA4C680000000000000000000000000005553440000000000C882FD6AB9862C4F90E57E1BA15C248CABAD5BF96840000000000F42407321033BF063167F21FF6C01045B4E2F03F519879B552D2611F0E885E01F08C88D15247446304402202E90609AAFBF4C105408CFF2377D48085879BEE3C7DE57AF125F73926284362A022002D7A487F5929F9A3E1050FC2B5D6AE1DD5384647AD1ABF6D322765F0ABE0A498114C882FD6AB9862C4F90E57E1BA15C248CABAD5BF983148DC6B336C7D3BE007297DB086B1D3483DEA24C2A
Is my transaction faulty? Then why was it correctly submitted to the network? It seems valid, but why is it not validated and hence finalized in the ledger?
Note: the responses use my internal model to represent some properties, which is why the names might be slightly different and some properties are omitted.
Result from the 'submit' call:
Result :SubmitResult{engineResult='tefPAST_SEQ', engineResultCode=-190, engineResultMessage='This sequence number has already passed.', txBlob='1200002200000000240000000061D4838D7EA4C680000000000000000000000000005553440000000000C882FD6AB9862C4F90E57E1BA15C248CABAD5BF96840000000000F42407321033BF063167F21FF6C01045B4E2F03F519879B552D2611F0E885E01F08C88D15247446304402202E90609AAFBF4C105408CFF2377D48085879BEE3C7DE57AF125F73926284362A022002D7A487F5929F9A3E1050FC2B5D6AE1DD5384647AD1ABF6D322765F0ABE0A498114C882FD6AB9862C4F90E57E1BA15C248CABAD5BF983148DC6B336C7D3BE007297DB086B1D3483DEA24C2A', txJson=TxJson{transactionType='Payment', account='rKHDh61BpcojAoiATgJgDaVwdSJ64fGNwF', destination='rDveJyEotoUp9jCD1Ghi2ktEBnhHiA6RBB', amount=Amount{currency='USD', value=1, issuer='rKHDh61BpcojAoiATgJgDaVwdSJ64fGNwF'}, fee='1000000', flags=0, sequence=0, signingPubKey='033BF063167F21FF6C01045B4E2F03F519879B552D2611F0E885E01F08C88D1524', txnSignature='304402202E90609AAFBF4C105408CFF2377D48085879BEE3C7DE57AF125F73926284362A022002D7A487F5929F9A3E1050FC2B5D6AE1DD5384647AD1ABF6D322765F0ABE0A49', hash='ECAB482EB34177FA1B1E6C724F038C42308004B1F307A169FAEA88C825E11642'}}
I submitted it a few times, so 'tefPAST_SEQ' is present.

Looks like your transaction object has a sequence field in it.
According to THIS, the sequence can be auto-filled. It can be set manually in case you want to submit multiple transactions at once by incrementing it yourself.
That gives you control over the order in which the transactions are executed.
If that doesn't matter, you can simply go without setting the sequence.
In your case your account looks like this (using account_info):
{
  "result": {
    "account_data": {
      "Account": "rKHDh61BpcojAoiATgJgDaVwdSJ64fGNwF",
      "Balance": "10000000000",
      "Flags": 0,
      "LedgerEntryType": "AccountRoot",
      "OwnerCount": 0,
      "PreviousTxnID": "12CA4E5AAF4198155FF3F16E53D35353B051F4AB5E01749833202339B48D187A",
      "PreviousTxnLgrSeq": 11450559,
      "Sequence": 1,
      "index": "169B6BA91A54B2EC86EFB618995A59E76F07853BB88AF231776118339FFD7268"
    },
    "ledger_hash": "449E3420C6B1C6959FA794066264432EF4E98543B0C6582B00D6CD28DE33B8F8",
    "ledger_index": 11523855,
    "status": "success",
    "validated": true
  }
}
See how result.account_data.Sequence is 1?
The reason you're seeing "This sequence number has already passed" is that you've set sequence=0 in your transaction (as shown in the result of your 'submit' call).
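For illustration, here is a rough Java sketch (not from your post) of fetching the current Sequence over the same WebSocket connection and putting it into the Payment before signing. The send/receive and JSON parsing are left to whatever your stack already uses; the field names follow the rippled account_info and Payment formats:
// Sketch only: assumes an already-open WebSocket session to wss://s.altnet.rippletest.net:51233
// and that responses are parsed with your own model classes.

// 1) Ask for the account's current Sequence.
String accountInfoRequest =
        "{\"id\": 1, \"command\": \"account_info\","
      + " \"account\": \"rKHDh61BpcojAoiATgJgDaVwdSJ64fGNwF\","
      + " \"ledger_index\": \"current\"}";
// send accountInfoRequest, then read result.account_data.Sequence from the response
int sequence = 1; // the value returned above; 1 in the account snapshot shown earlier

// 2) Use that value as the Sequence of the Payment before signing and submitting.
String paymentJson =
        "{\"TransactionType\": \"Payment\","
      + " \"Account\": \"rKHDh61BpcojAoiATgJgDaVwdSJ64fGNwF\","
      + " \"Destination\": \"rDveJyEotoUp9jCD1Ghi2ktEBnhHiA6RBB\","
      + " \"Amount\": {\"currency\": \"USD\", \"value\": \"1\","
      + "              \"issuer\": \"rKHDh61BpcojAoiATgJgDaVwdSJ64fGNwF\"},"
      + " \"Fee\": \"1000000\","
      + " \"Sequence\": " + sequence + "}";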
On a side note, I see you've set currency='USD', which means a trust line has to be opened first; your account currently has 0 account_lines.
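For reference, a trust line is created with a TrustSet transaction signed by the account that will hold the issued USD (here presumably the destination, rDveJyEotoUp9jCD1Ghi2ktEBnhHiA6RBB). This is only an illustrative sketch following the rippled TrustSet format; the limit value is arbitrary:
String trustSetJson =
        "{\"TransactionType\": \"TrustSet\","
      + " \"Account\": \"rDveJyEotoUp9jCD1Ghi2ktEBnhHiA6RBB\","
      + " \"LimitAmount\": {\"currency\": \"USD\","
      + "                   \"issuer\": \"rKHDh61BpcojAoiATgJgDaVwdSJ64fGNwF\","
      + "                   \"value\": \"1000\"}}"; // a limit of 1000 USD is just an example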
Either way, good luck using XRP ;)


Changing alias in ElasticSearch returns 200 and acknowledged but does not change alias

Using Elasticsearch 8.4.3 with Java 17 and a cluster of 3 nodes, all of which are master-eligible, we start with the following situation:
index products-2023-01-12-0900, which has an alias current-products
We then start a job that creates a new index products-2023-01-12-1520 and, at the end, using the elastic-rest-client on the client side and the aliases API, we make this call:
At 2023-01-12 16:27:26,893:
POST /_aliases
{"actions":[
{
"remove": {
"alias":"current-products",
"index":"products-*"
}
},
{
"add":{
"alias":"current-products",
"index":"products-2023-01-12-1520"}
}
]}
And we get the following response 26 ms later, with HTTP response code 200:
{"acknowledged":true}
But looking at what we end up with, the old index still has the current-products alias.
I don't understand why this happens, and it does not happen 100% of the time (it happened 2 times out of around 10 indexations).
Is it a known bug, or regular behaviour?
Edit for #warkolm:
GET /_cat/aliases?v before indexation as of now:
alias             index                      filter routing.index routing.search is_write_index
current-products  products-2023-01-13-1510  -      -             -              -
It appears that there might be an issue with the way you are updating the alias. When you perform a POST request to the _aliases endpoint with the "remove" and "add" actions, Elasticsearch will update the alias based on the current state of the indices at the time the request is executed.
However, it is possible that there are other processes or actions that are also modifying the indices or aliases at the same time, and this can cause conflicts or inconsistencies. Additionally, when you use the wildcard character (*) in the "index" field of the "remove" action, it will remove the alias from all indices that match the pattern, which may not be the intended behavior.
To reduce the chance of this happening, you can make the _aliases request more explicit. The actions in a single request are applied together, so the alias is only moved if all of the actions succeed, and instead of using the wildcard character you can explicitly specify the index that you want to remove the alias from.
Here is an example of how you could update the alias with explicit index names:
POST /_aliases
{
  "actions": [
    { "remove": { "index": "products-2023-01-12-0900", "alias": "current-products" } },
    { "add": { "index": "products-2023-01-12-1520", "alias": "current-products" } }
  ]
}
This way, the alias is only removed from the specific index products-2023-01-12-0900 and added to the specific index products-2023-01-12-1520, which avoids accidentally touching indices that other processes may be creating or modifying at the same time.
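For completeness, a minimal sketch of issuing that explicit request from Java with the Elasticsearch low-level REST client; the host, port and client setup are placeholders for your own environment:
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

// Runs inside a method that declares throws IOException.
try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
    Request request = new Request("POST", "/_aliases");
    // Explicit index names instead of the products-* wildcard.
    request.setJsonEntity("{\"actions\":["
        + "{\"remove\":{\"index\":\"products-2023-01-12-0900\",\"alias\":\"current-products\"}},"
        + "{\"add\":{\"index\":\"products-2023-01-12-1520\",\"alias\":\"current-products\"}}"
        + "]}");
    Response response = client.performRequest(request);
    System.out.println(response.getStatusLine()); // expect 200 OK with {"acknowledged":true}
}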
Additionally, it is worth keeping Elasticsearch patched to the latest release in your version line, as newer releases include bug fixes that could be relevant to what you are seeing.
In conclusion, the issue you are encountering may not be a known bug; it is expected behaviour if multiple processes modify the indices or aliases at the same time, and specifying the exact index to remove the alias from can help avoid it.

SugarCRM custom field

I'm writing software for data synchronization between a custom application and SugarCRM. Therefore I need an updateOrCreate() function. My problem is that the custom application uses different UUIDs than SugarCRM, so I can't match on the UUID to decide between update and create. So I want to save the custom UUID in a custom field in SugarCRM.
But I have no idea how to do that over the REST API of SugarCRM.
By the way, I'm writing a Java application.
Thank you for the help!
As far as I'm aware there is no update-or-create API (see https://your-sugarsite/rest/v10/help); however, if you just want to use the API (rather than customize it) you could sync data like this:
1) Fetch all ids of records that have a custom uuid by using the POST /rest/v10/<module>/filter endpoint and a payload similar to:
{
  "offset": 0,
  "max_num": 1000,
  "fields": ["id", "custom_uuid_c"],
  "filter": [{"custom_uuid_c": {"$not_empty": ""}}]
}
or if you just need a specific custom uuid at a time:
{
  "offset": 0,
  "max_num": 1000,
  "fields": ["id"],
  "filter": [{"custom_uuid_c": {"$equals": "example-custom-uuid"}}]
}
The response will look something like this:
{
  "next_offset": -1,
  "records": [
    {"id": "example-sugar-uuid", "custom_uuid_c": "example-custom-uuid"},
    ...
  ]
}
Notes:
Make sure to evaluate next_offset as even with a high max_num you may not get all records at once because of server limits. As long as next_offset isn't -1 you should use its value as offset in a new request to get the remaining records.
You can supply all field names you need to sync in the fields array, so that you get that information early and can check whether or not an update is required at all (maybe data is still up-to-date?)
Sugar also always includes certain fields in the response, whether they were requested or not (e.g. id and date_modified). I did not include them all in the response snippets for the sake of simplicity.
2)
Based on the information received in the previous step you know which sugar ID belongs to which custom UUID and you can detect/prepare data for updates.
If you need to sync everything and retrieve the complete list first, I suggest you build a lookup table custom-uuid => sugar-id, so that you do not have to loop through the data array and compare fields when looking for a specific value. Don't forget to consider the possibility of a custom UUID being present in more than one Sugar record at a time, unless you enforce uniqueness on the server/database side.
3)
Now that you have all the information you need, you can update and create records as needed (a rough sketch tying the steps together follows below):
Update existing record: PUT /rest/v10/<module>/<record_id>
Create missing record: POST /rest/v10/<module>
If you want to send a lot of creates and/or updates in a single request, have a look at the POST /rest/v10/bulk API - if your version of Sugar has it.
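Putting the three steps together, here is a rough sketch using the JDK 11+ HTTP client. The base URL, the module name (Accounts), the OAuth-Token header handling, the example ids and the placeholder JSON handling are all assumptions to adapt to your application:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Call this from a method that declares throws IOException, InterruptedException.
HttpClient http = HttpClient.newHttpClient();
String base = "https://your-sugarsite/rest/v10";
String token = "..."; // OAuth access token obtained elsewhere

// Step 1: look up the Sugar id for one custom UUID.
String filterBody = "{\"offset\":0,\"max_num\":1,\"fields\":[\"id\"],"
        + "\"filter\":[{\"custom_uuid_c\":{\"$equals\":\"example-custom-uuid\"}}]}";
HttpRequest filter = HttpRequest.newBuilder(URI.create(base + "/Accounts/filter"))
        .header("Content-Type", "application/json")
        .header("OAuth-Token", token)
        .POST(HttpRequest.BodyPublishers.ofString(filterBody))
        .build();
String records = http.send(filter, HttpResponse.BodyHandlers.ofString()).body();

// Steps 2/3: if a record came back, PUT an update to /Accounts/<id>; otherwise POST a new record.
// Use a real JSON parser to read the id; the contains() check below is only a placeholder.
boolean exists = records.contains("\"id\":");
String recordBody = "{\"name\":\"Example\",\"custom_uuid_c\":\"example-custom-uuid\"}";
HttpRequest upsert = (exists
        ? HttpRequest.newBuilder(URI.create(base + "/Accounts/example-sugar-uuid"))
              .PUT(HttpRequest.BodyPublishers.ofString(recordBody))
        : HttpRequest.newBuilder(URI.create(base + "/Accounts"))
              .POST(HttpRequest.BodyPublishers.ofString(recordBody)))
        .header("Content-Type", "application/json")
        .header("OAuth-Token", token)
        .build();
http.send(upsert, HttpResponse.BodyHandlers.ofString());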
Final notes:
The filter operators definition on /rest/v10/help seems incomplete; for more info you can check the filter docs.

How to change SortOrder to avoid "unsupported collating sort order" error?

I've been working on a program with a .mdb database from a third-party client. Everything was fine until I tried to update elements in the database. The sort order is not correct. I've tried to change it to General with MS Access, with no luck. The message I get when I execute the update query is:
java.lang.IllegalArgumentException: Given index Index#150ab4ed[
  name: (EXART) PrimaryKey
  number: 2
  isPrimaryKey: true
  isForeignKey: false
  data: IndexData#3c435123[
    dataNumber: 2
    pageNumber: 456
    isBackingPrimaryKey: true
    isUnique: true
    ignoreNulls: false
    columns: [
      ReadOnlyColumnDescriptor#50fe837a[
        column: Column#636e8cc[
          name: (EXART) ARCodArt
          type: 0xa (TEXT)
          number: 0
          length: 30
          variableLength: true
          compressedUnicode: true
          textSortOrder: SortOrder[3082(0)]
        ]
        flags: 1
      ]
    ]
    initialized: false
    pageCache: IndexPageCache#3a62c01e[
      pages: (uninitialized)
    ]
  ]
] is not usable for indexed lookups due to unsupported collating sort order SortOrder[3082(0)] for text index
    at com.healthmarketscience.jackcess.impl.IndexCursorImpl.createCursor(IndexCursorImpl.java:111)
    at com.healthmarketscience.jackcess.CursorBuilder.toCursor(CursorBuilder.java:302)
    at net.ucanaccess.commands.IndexSelector.getCursor(IndexSelector.java:150)
    at net.ucanaccess.commands.CompositeCommand.persist(CompositeCommand.java:83)
    at net.ucanaccess.jdbc.UcanaccessConnection.flushIO(UcanaccessConnection.java:268)
    at net.ucanaccess.jdbc.UcanaccessConnection.commit(UcanaccessConnection.java:169)
    at cultifortgestio.EntradaEixidaDades.Insercio(EntradaEixidaDades.java:76)
As you can see, Access does not change the sort order at all; I think it should be 1033, but it stays at 3082. Is there a way to change this? As I said, changing it in Access and performing a Compact and Repair Database didn't work for me.
As with other similar situations, the solution was to change the sort order of the affected database. This is usually done by
opening the database in Access,
changing the "New database sort order" option to "General - Legacy", and then
performing a Compact and Repair Database operation.
However, the wrinkle in this case was that the Windows locale was set to "Spanish", so the "General" sort options in Access do not map to a value that UCanAccess (Jackcess, actually) can update. The solution for the asker was to temporarily change their Windows locale to "English ...", perform the above steps to change the database sort order, and then change the Windows locale back.
For those who would prefer not to mess with their Windows locale settings, an alternative solution would be to have UCanAccess create a new empty database file via the newDatabaseVersion option, e.g.,
String connStr = "jdbc:ucanaccess://C:/someplace/new.accdb;newDatabaseVersion=V2010";
try (Connection conn = DriverManager.getConnection(connStr)) {
    // nothing to do: the empty .accdb file is created as a side effect of opening the connection
}
open the new database in Access, and then transfer the tables from the old database file into the new one using the Import feature. The database file created by UCanAccess will have a sort order that is compatible with update operations.

Camel / MongoDB - $in operator with reference to another collection/document array

I came across this blog post while looking for a way to organize relationships. What I'm getting confused about is the syntax behind the following statement. I realize that, by virtue of the JavaScript variables, the following is possible:
var party = {
  _id: "chessparty",
  name: "Chess Party!",
  attendees: ["seanhess", "bob"]
}
var user = { _id: "seanhess", name: "Sean Hess", events: ["chessparty"]}
db.events.save(party)
db.users.save(user)
db.events.find({_id: {$in: user.events}}) // events for user
db.users.find({_id: {$in: party.attendees}}) // users for event
The last two lines are throwing me for a spin, though, since what I'm trying to do is something like this in Java. So I understand the idea, but I want to accomplish this in Java, more specifically with the Camel/MongoDB component.
I've been referencing the following documentation and looking at the "findAll" operation. So would I need to first run a query to get the array, for example "user.events", and then run a second query to find the list of events? Or is there a way to reference the field "events" in collection "db.user" as part of the query on "db.events"?
Something to the tune of the following, with a single query:
pseudo idea: db.events.find({_id: {$in: [db.user.events]}})
Ultimately I'm looking to translate this into something like the following:
from("direct:findAll")
.setBody().constant("{ \"_id\": {$in :\"user.events\" }}")
.to("mongodb:myDb?database=sample&collection=events&operation=findAll")
.to("mock:resultFindAll");
I'm a bit new to the MongoDB Camel component, so I'm wondering if there are any gurus who have already been there and done that sort of thing, and have any advice on the subject. Or to find out, without two days of trial and error, that this simply isn't possible?
Thanks!
I thought I'd wrap this question up; it has been some time now, and a few weeks ago I was able to work past this.
Basically I wound up storing an array of user IDs in the events collection.
example:
{
  _id : 22bjh2345j2k3v235,
  eventName : "something",
  eventDate : ISODate(...),
  attendees : [
    "abc123",
    "def098",
    "etc..."
  ]
}
essentially assigning users to events. This way I could find all events a user was participating in, and I wound up with a list of users per event.
If I wanted to find all events for a user:
from("direct:findAll")
.setBody().simple("{ \"attendees\": \"${header.userId}\" }")
.to("mongodb:myDb?database=sample&collection=events&operation=findAll")
.to("mock:resultFindAll");

Google BigQuery - query ran successfully but results not pushed to destination table

We run a nightly query against BigQuery via the Java REST API that specifies a destination table for the results to be pushed to (write disposition=WRITE_TRUNCATE). Today's query appeared to run without errors but the results were not pushed to the destination table.
This query has been running for a few weeks now and we've had no issues. No code changes were made either.
Manually running it a second time after it "failed" worked fine. It was just this one glitch that we spotted and we're concerned it may happen again.
Our logged JSON response from the "failed" query looks fine (I've obfuscated any sensitive data):
INFO: Job finished successfully: {
  "configuration" : {
    "dryRun" : false,
    "query" : {
      "createDisposition" : "CREATE_IF_NEEDED",
      "destinationTable" : {
        "datasetId" : "[REMOVED]",
        "projectId" : "[REMOVED]",
        "tableId" : "[REMOVED]"
      },
      "priority" : "INTERACTIVE",
      "query" : "[REMOVED]",
      "writeDisposition" : "WRITE_TRUNCATE"
    }
  },
  "etag" : "[REMOVED]",
  "id" : "[REMOVED]",
  "jobReference" : {
    "jobId" : "[REMOVED]",
    "projectId" : "[REMOVED]"
  },
  "kind" : "bigquery#job",
  "selfLink" : "[REMOVED]",
  "statistics" : {
    "creationTime" : "1390435780070",
    "endTime" : "1390435780769",
    "query" : {
      "cacheHit" : false,
      "totalBytesProcessed" : "12546"
    },
    "startTime" : "1390435780245",
    "totalBytesProcessed" : "12546"
  },
  "status" : {
    "state" : "DONE"
  }
}
Using the "try it!" for Jobs/GET here and plugging in the job id also shows the job was indeed successful and matches our logged output (pasted above).
Checking the web console shows the destination table has been truncated but not updated. Weirdly, the "Last Modified" has not been updated (I did try refreshing the page numerous times):
http://i.stack.imgur.com/384NL.png
Has anyone experienced this before with BigQuery: a query appearing to run successfully, but when a destination table was specified the results were not pushed to it, even though the table was truncated?
I am a developer on the BigQuery team. I've looked up the details of your job from the breadcrumbs you left (your query was the only one that started at that start time).
It looks like your destination table was truncated at 4:09 pm today PST, which is the time your job ran, but it was left empty -- the query that truncated it didn't actually fill in any information.
I'm having a little bit of trouble piecing together the details, because one of the source tables appears to have been overwritten (the left table in your left outer join was created at 4:20 PM).
However, there is a clue in the "total bytes processed" field -- it says that the query only processed 12K of data. The internal statistics say that only 384 rows were involved in the query among both tables that were involved.
My guess is that the query legitimately returned 0 rows, so the table was cleared.
There is a bug in that deleting all of the data in a table doesn't update the last modified time. We use last modified to mean either the last time the metadata was updated (like description, schema, etc.) or the last time the table had data added to it. But if you just truncate the table, that doesn't update the metadata or add data, so we end up with a stale last-modified time.
If this doesn't sound like a reasonable chain of events, we'll need more information from you about how to debug it (especially since it looks like the tables involved have been modified since you ran this query), and a way that we can reproduce it would be great.
So, we figured out what the problem is with this. It failed again a few times over the last few days so we dug in further.
The query that is being executed is dependent on another query which is executed immediately before it. Although we do wait for the first query to finish (job status = "DONE"), it appears that behind the scenes it's actually not fully complete and its data is not yet available to be used.
Current process is:
Fetch data from another data source and stream the results into table A
When (1) is complete (poll the job id until it reports status "DONE"), submit another query which joins on the results in table A to create table B
Table A's data is not yet available, so the query from (2) results in an empty table
We've noticed it takes about 5-10 seconds for the data to actually appear and be available in BigQuery when using streaming for the first query.
We used a fairly ugly workaround - simply wait a few seconds after the first query before running the next one. Not exactly elegant but it works.
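For what it's worth, the workaround boils down to something like the following sketch; runQueryJob and pollUntilDone are hypothetical stand-ins for the actual REST calls we make, since the only point here is the ordering and the pause between the two jobs:
// firstQuery / secondQuery are the SQL strings we run; the helpers below are hypothetical.
void runNightlyPipeline(String firstQuery, String secondQuery) throws InterruptedException {
    // Step 1: load/stream the source data into table A and wait for the job to report "DONE".
    String firstJobId = runQueryJob(firstQuery);   // hypothetical helper
    pollUntilDone(firstJobId);                     // hypothetical helper

    // Workaround: streamed rows can take roughly 5-10 seconds to become queryable,
    // so pause before running the dependent query that joins on table A.
    Thread.sleep(10_000);

    // Step 2: run the dependent query that builds table B (WRITE_TRUNCATE destination).
    String secondJobId = runQueryJob(secondQuery); // hypothetical helper
    pollUntilDone(secondJobId);
}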
