I'm performing a test with Couchbase 4.0 and Java SDK 2.2. I'm inserting 10 documents whose keys always start with "190".
After inserting these 10 documents I query them with:
cb.restore("190", cache);
Thread.sleep(100);
cb.restore("190", cache);
The query within the 'restore' method is:
Statement st = Select.select("meta(c).id, c.*").from(this.bucketName + " c").where(Expression.x("meta(c).id").like(Expression.s(callId + "_%")));
N1qlQueryResult result = bucket.query(st);
The first call to restore returns 0 documents:
Query 'SELECT meta(c).id, c.* FROM cache c WHERE meta(c).id LIKE "190_%"' --> Size = 0
The second call (100ms later) returns the 10 documents:
Query 'SELECT meta(c).id, c.* FROM cache c WHERE meta(c).id LIKE "190_%"' --> Size = 10
I tried adding PersistTo.MASTER to the 'insert' call, but that didn't work either.
It seems that the insert is not persisted immediately.
Any help would be really appreciated.
Joan.
You're using N1QL to query the data, and N1QL is only eventually consistent (by default), so the documents only show up after the indexes are recalculated. This isn't related to whether or not the data is persisted (meaning: written from RAM to disk).
You can try changing the scan_consistency level from its default, NOT_BOUNDED, to get consistent results, but queries will take longer to return.
Read more here: Java scan_consistency options
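For illustration, a minimal sketch of how that could look with the 2.2 Java SDK, reusing the `st` statement and `bucket` reference from your question:

import com.couchbase.client.java.query.N1qlParams;
import com.couchbase.client.java.query.N1qlQuery;
import com.couchbase.client.java.query.N1qlQueryResult;
import com.couchbase.client.java.query.consistency.ScanConsistency;

// REQUEST_PLUS blocks the query until the index reflects all mutations
// made before the request, trading latency for consistency.
N1qlParams params = N1qlParams.build().consistency(ScanConsistency.REQUEST_PLUS);
N1qlQueryResult result = bucket.query(N1qlQuery.simple(st, params));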
Hey, I'm currently writing a Java application that accesses Neo4j via the Spring Neo4j driver.
I have a couple of nodes with arrays. Now I'm trying to write a Cypher query that deletes an element from an array of a matched node. If that element was the last one, I would like to delete the complete node. To achieve this I'm using apoc.do.when. You can find a simplified version of my query below.
MATCH (n:NODE) WHERE "Peter" IN n.NAMES
CALL apoc.do.when(size(n.NAMES) > 1, 'SET n.NAMES = [x IN n.NAMES WHERE x <> "Peter"]', 'DETACH DELETE n') YIELD value
RETURN value
My query is overall working fine, but I no longer get the result summary back in my Java application.
I'm calling the query the following way:
ResultSummary output = driver.session().run(query.withParameters(params)).consume();
I know that the query is executed and deletes a node; I validated that in the Neo4j Browser. But the result summary says:
serverInfo=InternalServerInfo{address='localhost:7687', version='Neo4j/3.5.17'}, databaseInfo=InternalDatabaseInfo{name='null'}, queryType=READ_WRITE, counters=null, plan=null, profile=null, notifications=[], resultAvailableAfter=143, resultConsumedAfter=1}
Updates: 0
Delete: 0
Therefore I cannot validate from my Java code whether the operation was successful. I assume that apoc.do.when does not propagate the result summary from the inner query correctly. Is there any way to fix this, or do I need to validate it with a second query?
You can modify the Cypher query to return a result that indicates whether an update or deletion occurred. For example:
MATCH (n:NODE) WHERE "Peter" IN n.NAMES
CALL apoc.do.when(
size(n.NAMES) > 1,
'SET n.NAMES = [x IN n.NAMES WHERE x <> "Peter"] RETURN "updated" AS res',
'DETACH DELETE n RETURN "deleted" AS res') YIELD value
RETURN value.res AS res
Then your Java client can iterate through (or stream) the resulting records and count the number of updates and deletions.
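For example, counting them could look roughly like this (a sketch assuming the 4.x Java driver and the `query`/`params` variables from your question; with the 1.x driver the Result type is StatementResult):

try (Session session = driver.session()) {
    Result result = session.run(query.withParameters(params));
    int updated = 0, deleted = 0;
    for (Record record : result.list()) {
        // "res" is the alias returned by the modified Cypher query above
        if ("updated".equals(record.get("res").asString())) {
            updated++;
        } else {
            deleted++;
        }
    }
    System.out.println("updated=" + updated + ", deleted=" + deleted);
}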
I'm currently facing very slow or no response on a collection when looking up by ID. I have ~2 million documents in a partitioned collection. If I look up a document using the partitionKey and id, the response is immediate:
SELECT * FROM c WHERE c.partitionKey=123 AND c.id="20566-2"
If I try using only the id:
SELECT * FROM c WHERE c.id="20566-2"
the response never returns; the Java client seems frozen, and I see the same behavior using the Data Explorer in the Azure Portal. I also tried looking up by another field that is neither the id nor the partitionKey, and the response always returns. When I run the SELECT from the Java client, I always set the flag to enable cross-partition queries.
The next thing to try is avoiding the character "-" in the ID, to test whether this character blocks the query (although I didn't find anything about it in the documentation).
The issue is related to your Java code. The Azure DocumentDB Java SDK wraps the DocumentDB REST APIs, and according to the REST API reference for Query Documents, as #DanCiborowski-MSFT said, the header x-ms-documentdb-query-enablecrosspartition explains the reason for your issue, as below.
Header: x-ms-documentdb-query-enablecrosspartition
Required/Type: Optional/Boolean
Description: If the collection is partitioned, this must be set to True to allow execution across multiple partitions. Queries that filter against a single partition key, or against single-partitioned collections do not need to set the header.
So you need to set it to true to enable querying across multiple partitions without a partitionKey in the WHERE clause, by passing an instance of the FeedOptions class to the queryDocuments method, as below.
FeedOptions queryOptions = new FeedOptions();
queryOptions.setEnableCrossPartitionQuery(true); // Enable query across multiple partitions
String collectionLink = collection.getSelfLink();
FeedResponse<Document> queryResults = documentClient.queryDocuments(
        collectionLink,
        "SELECT * FROM c WHERE c.id='20566-2'", queryOptions);
To implement pagination on a list, I need to run two queries:
Get the element count from the selected table using SELECT COUNT(*)...
Get the subset of the list using LIMIT and OFFSET in a query.
Is there any way to avoid this? Is there any metadata where this is stored?
The function resultSet.getRow() retrieves the current row index, but to use it I would need a query that returns all rows and then take a subset, which is too expensive.
I want to send a single query with limit and offset and retrieve both the selected data and the total count.
Is this possible?
Thanks in advance,
Juan
I read some things about this, and now I have new doubts.
When a query is launched with limits, we can add SQL_CALC_FOUND_ROWS to the SELECT clause, as follows:
"SELECT SQL_CALC_FOUND_ROWS * FROM ... LIMIT 0,10"
Afterwards, I run the following:
"SELECT FOUND_ROWS()"
I understand that the first query stores the count in an internal variable whose value is returned by the second query. The second query isn't a SELECT COUNT(*) query, so SELECT FOUND_ROWS() should be inexpensive.
Am I right?
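For reference, this is roughly how I call it from JDBC (a sketch; the table and connection details are placeholders, and note that FOUND_ROWS() is a per-connection value, so both statements must run on the same connection):

import java.sql.*;
import java.util.*;

public class FoundRowsExample {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost/test", "user", "pass");
             Statement st = con.createStatement()) {
            // First query: fetch the page and ask MySQL to remember the full count
            List<String> page = new ArrayList<>();
            try (ResultSet rs = st.executeQuery(
                    "SELECT SQL_CALC_FOUND_ROWS * FROM mytable LIMIT 0, 10")) {
                while (rs.next()) {
                    page.add(rs.getString(1));
                }
            }
            // Second query: read back the count computed by the first one
            long total = 0;
            try (ResultSet rs = st.executeQuery("SELECT FOUND_ROWS()")) {
                if (rs.next()) {
                    total = rs.getLong(1);
                }
            }
            System.out.println("total=" + total + ", page size=" + page.size());
        }
    }
}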
Some tests that I have made show the following:
-- first: SELECT COUNT(*), then SELECT with LIMIT --
Test 1: 194 ms
out: {"total":94607,"list":["2 - 1397199600000","2 - 1397286000000","13 - 1398150000000","13 - 1398236400000","13 - 1398322800000","13 - 1398409200000","13 - 1398495600000","14 - 1398150000000","14 - 1398236400000","14 - 1398322800000"]}
-- the new way: SQL_CALC_FOUND_ROWS --
Test 2: 555 ms
out: {"total":94607,"list":["2 - 1397199600000","2 - 1397286000000","13 - 1398150000000","13 - 1398236400000","13 - 1398322800000","13 - 1398409200000","13 - 1398495600000","14 - 1398150000000","14 - 1398236400000","14 - 1398322800000"]}
Why don't the tests show the expected result?
Are my assumptions wrong?
Thanks, regards
I have resolved the question myself. The following link has the answer:
https://www.percona.com/blog/2007/08/28/to-sql_calc_found_rows-or-not-to-sql_calc_found_rows/
In short, SQL_CALC_FOUND_ROWS forces the server to compute the entire result set instead of stopping at the LIMIT, so the two-query approach can be faster when the COUNT(*) can be answered from an index, which matches my measurements above.
I have a table with approximately 62,000,000 rows, and I need to select data from it and export it to a .txt or .csv file.
My query limits the result to approximately 60,000 rows.
When I run the query on my development machine, it consumes all the memory and I get a java.lang.OutOfMemoryError.
At the moment I use Hibernate for the DAO layer, but I can change to a pure JDBC solution if you recommend it.
My pseudo-code is:
List<Map> list = myDao.getMyData(param); // program crashes here
initFile();
for (Map map : list) {
    util.append(map); // transforms one row into a line of the file
}
closeFile();
Can you suggest how I should write my file?
Note: I use .setResultTransformer(Transformers.ALIAS_TO_ENTITY_MAP); to get a Map instead of an entity.
You could use Hibernate's ScrollableResults. See the documentation here: http://docs.jboss.org/hibernate/orm/4.3/manual/en-US/html/ch11.html#objectstate-querying-executing-scrolling
This uses server-side cursors, if your database engine/driver supports them. For this to work, be sure to set the following properties:
query.setReadOnly(true);
query.setCacheable(false);
ScrollableResults results = query.scroll(ScrollMode.FORWARD_ONLY);
while (results.next()) {
    // results.get() returns an Object[]; the entity is the first element
    SomeEntity entity = (SomeEntity) results.get()[0];
    // write the row to the export file here, then evict the entity
    // (e.g. session.evict(entity)) so the persistence context stays small
}
results.close();
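If the persistence context still grows too large, a rough alternative sketch is a StatelessSession, which never caches loaded entities (the sessionFactory reference is assumed, and the Integer.MIN_VALUE fetch size is a MySQL Connector/J-specific streaming hint; skip it on other drivers):

StatelessSession session = sessionFactory.openStatelessSession();
try {
    Query query = session.createQuery("from SomeEntity");
    query.setReadOnly(true);
    // MySQL Connector/J only streams row-by-row with this fetch size
    query.setFetchSize(Integer.MIN_VALUE);
    ScrollableResults results = query.scroll(ScrollMode.FORWARD_ONLY);
    while (results.next()) {
        SomeEntity entity = (SomeEntity) results.get()[0];
        // append the row to the export file here
    }
    results.close();
} finally {
    session.close();
}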
Lock the table, then perform subset selections and exports, appending to the results file. Ensure you unconditionally unlock when done.
Not nice, but the task will run to completion even on servers or clients with limited resources.
I am trying to implement paging in Hibernate and I am seeing some weird behavior from Hibernate. I have tried two queries with the same result:
List<SomeData> dataList = (List<SomeData>) session.getCurrentSession()
.createQuery("from SomeData ad where ad.bar = :bar order by ad.id.name")
.setString("bar", foo)
.setFirstResult(i*PAGE_SIZE)
.setMaxResults(PAGE_SIZE)
.setFetchSize(PAGE_SIZE) // page_size is 1000 in my case
.list();
and
List<SomeData> datalist= (List<SomeData>) session.getCurrentSession()
.createCriteria(SomeData.class)
.addOrder(Order.asc("id.name"))
.add(Expression.eq("bar", foo))
.setFirstResult(i*PAGE_SIZE)
.setMaxResults(PAGE_SIZE)
.list();
I have this in a for loop, and each time the query runs, the run time increases: the first call returns in 100 ms, the second in 150 ms, the 5th call takes 2 seconds, and so on.
Looking at the server (MySQL 5.1.36) logs, I see that the SELECT query is generated properly with the LIMIT clause, but for each record that is returned, Hibernate for some reason also emits an UPDATE query. After the first page it updates 1,000 records, after the second page 2,000 records, and so on. So for a page size of 1,000 and 5 iterations of the loop, the database is hit with 15,000 UPDATE queries (1K + 2K + 3K + 4K + 5K). Why is that happening?
I tried making a native SQL query and it worked as expected. The query is
List asins = (List) session.getCurrentSession()
    .createSQLQuery("SELECT * FROM some_data WHERE foo = :foo ORDER BY bar LIMIT :from, :page")
    .addScalar(..)
    .setInteger("page", PAGE_SIZE)
    .setInteger("from", (i*PAGE_SIZE))
    ... // set other params
    .list();
My mapping class has setters/getters for the blob object as:
void setSomeBlob(Blob blob) {
    this.someByteArray = this.toByteArray(blob);
}
Blob getSomeBlob() {
    return Hibernate.createBlob(someByteArray);
}
Turn on bound parameters logging (you can do that by setting "org.hibernate.type" log level to "TRACE") to see what specifically is being updated.
Most likely you're modifying the entities after they've been loaded, either explicitly or implicitly (e.g. returning a different value from a getter or using a default value somewhere). In your case, getSomeBlob() returns a freshly created Blob on every call, so Hibernate's dirty checking sees every loaded entity as modified and flushes an UPDATE for each of them.
Another possibility is that you've recently altered (one of) the table(s) you're selecting from and column default in the table doesn't match default value in the entity.
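If the blob accessors are indeed the trigger, a rough sketch of two possible fixes (the property and query names are taken from your question; setReadOnly is standard Hibernate query API):

// Option 1: map the byte[] property directly, so dirty checking
// compares stable values instead of freshly created Blob instances
public byte[] getSomeByteArray() {
    return someByteArray;
}
public void setSomeByteArray(byte[] someByteArray) {
    this.someByteArray = someByteArray;
}

// Option 2: mark the query read-only so loaded entities are never
// dirty-checked and no UPDATE statements are flushed
List<SomeData> dataList = (List<SomeData>) session.getCurrentSession()
    .createQuery("from SomeData ad where ad.bar = :bar order by ad.id.name")
    .setString("bar", foo)
    .setReadOnly(true)
    .setFirstResult(i * PAGE_SIZE)
    .setMaxResults(PAGE_SIZE)
    .list();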