We are using IBM Content Manager v8. I am looking for the correct syntax to find all documents uploaded to CM after a specific date. I would like to use the document or object creation timestamp, which is an internal attribute of every document uploaded to CM.
Our backend database for CM is DB2.
From the example, I see that we can use
/<item_type> [ICMCHECKEDOUT/#ICMCHKOUTTS > "2013-11-20-12.00.00.000000"]
to find all documents that were checked out after the specified date.
Is there an internal attribute for the time at which the object was added to CM? If yes, what is the syntax?
FYI: the DKLobICM.getCreatedTimestamp() method returns the exact time the object was added to CM, but I need a way to use this timestamp in my query.
Rgds,
Raj.
For the record:
/<item_type>[#SYSROOTATTRS.CREATETS>"2013-11-20-12.00.00.000000"]
will give you the expected results.
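If you need to build that timestamp literal from a java.util.Date in your code, a plain SimpleDateFormat is enough. This is only a sketch, and "MyItemType" is a placeholder for your item type name:
import java.text.SimpleDateFormat;
import java.util.Date;

// CM timestamp literals look like "2013-11-20-12.00.00.000000".
// java.util.Date only carries milliseconds, so pad the fraction to six digits.
SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd-HH.mm.ss.SSS");
String literal = fmt.format(new Date()) + "000";
String xquery = "/MyItemType[#SYSROOTATTRS.CREATETS>\"" + literal + "\"]";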
Try This
/<item_type>[#AttributeRef="xxxxx" AND #CREATETS BETWEEN "2015-06-01-00.00.00.000" AND "2015-12-31-00.00.00.0000"]
I have a field which contains forward slashes. I'm trying to execute this Query:
QueryBuilders.termQuery("id", QueryParser.escape("/my/field/val"))
and I cannot get any results. When I look for 'val' only, I get the proper results. Any ideas why that is happening? Of course, without escaping it also doesn't return the results.
UPDATE
So QueryParser.escape escapes the string properly, but when the request goes to Elasticsearch it is double-escaped:
[2015-07-10 01:53:00,063][WARN ][index.search.slowlog.query] [Aaa AA] [index_name][4] took[420.8micros], took_millis[0], types[page], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{"query":{"term":{"pageId":"\\/path\\/and\\/testestest"}}}], extra_source[],
UPDATE 2: It works when I'm using query_string, but I wouldn't like to use that and type everything by hand.
You might have to use _id instead of Id
So the reason I didn't get any results is the default index I had created.
I didn't specify a mapping for my field, so Elasticsearch analyzed it with its default analyzer.
In the Elasticsearch documentation I read that, during the analysis process, Elasticsearch splits the string into words, lower-cases them and does some other processing.
In my case my "/path/in/my/field" was split into four tokens:
path
in
my
field
So when I was searching for "pageId:/path/in/my/field" I didn't get any results, because pageId in fact didn't contain that exact value.
To solve the issue I had to add a proper mapping to the pageId field which doesn't do any preprocessing (instead of four words, I now have the single token "/path/in/my/field").
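For reference, a mapping along these lines keeps the whole path as a single token; the index name index_name and the type page are taken from the slow-log line above, and "index": "not_analyzed" is the setting for string fields in the Elasticsearch 1.x/2.x line used here:
PUT /index_name/_mapping/page
{
  "page": {
    "properties": {
      "pageId": { "type": "string", "index": "not_analyzed" }
    }
  }
}
Note that the mapping of an existing field can't be changed in place, so in practice this means creating a new index with the mapping and reindexing the documents.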
Links to docs:
https://www.elastic.co/guide/en/elasticsearch/guide/current/analysis-intro.html
https://www.elastic.co/guide/en/elasticsearch/guide/current/mapping-intro.html
I have been trying to retrieve information by querying a specific asset (Story/Defect) in V1 using the VersionOne.SDK.Java.APIClient. I have been able to retrieve information like ID.Number and Status.Name, but not Requests.Custom_SFDCChangeReqID2 under a Story or a Defect.
I checked the metadata for:
https://.../Story?xsl=api.xsl
https://.../meta.V1/Defect?xsl=api.xsl
https://.../meta.V1/Request?xsl=api.xsl
And the naming and information look right.
Here is my code:
IAssetType type = metaModel.getAssetType("Story");
IAttributeDefinition requestCRIDAttribute = type.getAttributeDefinition("Requests.Custom_SFDCChangeReqID2");
IAttributeDefinition idNumberAttribute = type.getAttributeDefinition("ID.Number");
Query query = new Query(type);
query.getSelection().add(requestCRIDAttribute);
query.getSelection().add(idNumberAttribute);
Asset[] results = v1Api.retrieve(query).getAssets();
String RequestCRID= result.getAttribute(requestCRIDAttribute).getValue().toString();
String IdNumber= result.getAttribute(idNumberAttribute).getValue().toString();
At this point I can get values for ID.Number, but I am not able to retrieve any information for Custom_SFDCChangeReqID2.
When I run the RESTful query from a browser against the server, it works and does retrieve the information I am looking for. I used this syntax:
https://.../rest-1.v1/Data/Story?sel=Number,ID,Story.Requests.Custom_SFDCChangeReqID2,Story.
Alex: Remember that results is an array of Assets, so I guess you should be accessing the information using something like
String RequestCRID= results[0].getAttribute(requestCRIDAttribute).getValue().toString();
String IdNumber= results[0].getAttribute(idNumberAttribute).getValue().toString();
or iterate through the array, e.g. as in the sketch below.
Also notice that you have defined:
Asset[] results and not result
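A minimal iteration sketch, reusing the attribute definitions from the question (the null checks are just defensive, since the relation attribute may be empty for a given Story):
for (Asset asset : results) {
    Object idNumber = asset.getAttribute(idNumberAttribute).getValue();
    Object requestCRID = asset.getAttribute(requestCRIDAttribute).getValue();
    System.out.println((idNumber != null ? idNumber.toString() : "") + " -> "
            + (requestCRID != null ? requestCRID.toString() : ""));
}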
Hi, thanks for your answer! I completely forgot about showing the loop; I was too focused on the information-retrieval part. Yes, I was actually using a loop, and yes, I created a temporary variable to check what I was getting from the query.
Because I was getting the variables one by one, I was only using the first record. My code works after all; it was just that what I was querying didn't contain any information of use to me, which is why I was not finding any. Anyway, thanks for your comments and observations.
This is a Neo4j REST API call related error. From my Java code I'm making a REST API call to a remote Neo4j database, passing a query and parameters; the query being executed is as below:
MERGE (s:Sequence {name:'CommentSequence'})
  ON CREATE SET s.current = 1
  ON MATCH SET s.current = s.current + 1
WITH s.current as sequenceCounter
MERGE (cmnt01:Comment {text: {text}, datetime: {datetime}, type: {type}})
SET cmnt01.id = sequenceCounter
WITH cmnt01
MATCH (g:Game {game_id:{gameid}}), (b:Block {block_id:{bid}, game_id:{gameid}}), (u:User {email_id:{emailid}})
MERGE (b)-[:COMMENT]->(cmnt01)<-[:COMMENT]-(u)
Basically this query generates a sequence number at run time and sets the 'CommentId' property of the Comment node to this sequence number before attaching the comment node to a Game's block, i.e. for every comment added by the user I'm adding a sequence number as its id.
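For reference, a stripped-down sketch of how such a parameterized call could be issued against Neo4j's transactional Cypher endpoint; the host, parameter values and error handling are placeholders, not the poster's actual code:
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// POST the statement plus its parameters as JSON to the transactional endpoint.
URL url = new URL("http://localhost:7474/db/data/transaction/commit");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setRequestProperty("Content-Type", "application/json");
conn.setRequestProperty("Accept", "application/json");
conn.setDoOutput(true);

String payload = "{\"statements\":[{\"statement\":\"MERGE (s:Sequence {name:'CommentSequence'}) ...\","
        + "\"parameters\":{\"text\":\"hello\",\"gameid\":\"g1\"}}]}";
conn.getOutputStream().write(payload.getBytes("UTF-8"));

// Read the raw body even on failure: if the server answers with an HTML error
// page instead of JSON, this is where you would see it.
InputStream body = conn.getResponseCode() < 400 ? conn.getInputStream() : conn.getErrorStream();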
This works for almost 90% of the cases, but there are a couple of cases a day when it fails with the error below:
ERROR com.exectestret.dao.BaseGraphDAO - Query execution error: Error reading as JSON ''
Why does the Neo4j query not return a proper error code? It just says "Error reading as JSON ''".
Neo4J Version is
Neo4j Community Edition 2.2.1
Thanks,
Deepesh
It gets HTML back and can't read it as JSON, but it should output the failing HTML. Can you check the log output for that and share it too?
Also check your graph.db/messages.log and data/log/console.log for any error messages.
I'd like to iterate over every document in a (probably big) Lotus Domino database and be able to continue from the last one if the processing breaks (network connection error, application restart, etc.). I don't have write access to the database.
I'm looking for a way where I don't have to download documents from the server that have already been processed. So I have to pass some starting information to the server indicating which document should be the first in the (possibly restarted) processing.
I've checked the AllDocuments property and the DocumentCollection.getNthDocument method, but this collection is unsorted, so I guess the order can change between two calls.
Another idea was using a formula query, but it does not seem that ordering is possible with these queries.
The third idea was the Database.getModifiedDocuments method with a corresponding Document.getLastModified call. It seemed good, but
it looks to me that the ordering of the returned collection is not documented and is based on creation time instead of last modification time.
Here is a sample code based on the official example:
System.out.println("startDate: " + startDate);
final DocumentCollection documentCollection =
database.getModifiedDocuments(startDate, Database.DBMOD_DOC_DATA);
Document doc = documentCollection.getFirstDocument();
while (doc != null) {
System.out.println("#lastmod: " + doc.getLastModified() +
" #created: " + doc.getCreated());
doc = documentCollection.getNextDocument(doc);
}
It prints the following:
startDate: 2012.07.03 08:51:11 CEDT
#lastmod: 2012.07.03 08:51:11 CEDT #created: 2012.02.23 10:35:31 CET
#lastmod: 2012.08.03 12:20:33 CEDT #created: 2012.06.01 16:26:35 CEDT
#lastmod: 2012.07.03 09:20:53 CEDT #created: 2012.07.03 09:20:03 CEDT
#lastmod: 2012.07.21 23:17:35 CEDT #created: 2012.07.03 09:24:44 CEDT
#lastmod: 2012.07.03 10:10:53 CEDT #created: 2012.07.03 10:10:41 CEDT
#lastmod: 2012.07.23 16:26:22 CEDT #created: 2012.07.23 16:26:22 CEDT
(I don't use any AgentContext here to access the database. The database object comes from a session.getDatabase(null, databaseName) call.)
Is there any way to reliably do this with the Lotus Domino Java API?
If you have access to change the database, or could ask someone to do so, then you should create a view that is sorted on a unique key, or modified date, and then just store the "pointer" to the last document processed.
Barring that, you'll have to maintain a list of previously processed documents yourself. In that case you can use the AllDocuments property and just iterate through them. Use the GetFirstDocument and GetNextDocument as they are reportedly faster than GetNthDocument.
Alternatively you could make two passes, one to gather a list of UNIDs for all documents, which you'll store, and then make a second pass to process each document from the list of UNIDs you have (using GetDocumentByUNID method).
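A bare-bones sketch of that two-pass idea (persisting the UNID list between runs is left as a placeholder):
import lotus.domino.*;
import java.util.ArrayList;
import java.util.List;

// NotesException handling omitted for brevity.
// Pass 1: collect the universal IDs of all documents.
List<String> unids = new ArrayList<String>();
DocumentCollection all = database.getAllDocuments();
Document doc = all.getFirstDocument();
while (doc != null) {
    unids.add(doc.getUniversalID());
    Document next = all.getNextDocument(doc);
    doc.recycle();
    doc = next;
}
// persist 'unids' somewhere durable here

// Pass 2: process each stored UNID; on restart, skip the ones already marked done.
for (String unid : unids) {
    Document d = database.getDocumentByUNID(unid);
    // ... process d, then record unid as done ...
    d.recycle();
}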
I don't use the Java API, but in Lotusscript, I would do something like this:
Locate a view displaying all documents in the database. If you want the agent to be really fast, create a new view. The first column should be sorted and could contain the Universal ID of the document. The other columns contain all the values you want to read in your agent; in your example that would be the created date and the last modified date.
Your code could then simply loop through the view like this:
lastSuccessful = FunctionToReadValuesSomewhere() ' Returns 0 if empty
Set view = thisdb.GetView("MyLookupView")
Set col = view.AllEntries
Set entry = col.GetFirstEntry
cnt = 0
Do Until entry is Nothing
cnt = cnt + 1
If cnt > lastSuccessful Then
universalID = entry.ColumnValues(0)
createDate = entry.ColumnValues(1)
lastmodifiedDate = entry.ColumnValues(2)
Call YourFunctionToDoStuff(universalID, createDate, lastmodifiedDate)
Call FunctionToStoreValuesSomeWhere(cnt, universalID)
End If
Set entry = col.GetNextEntry(entry) ' advance to the next entry, otherwise the loop never terminates
Loop
Call FunctionToClearValuesSomeWhere()
Simply store the last successful value and Universal ID in say a text file or environment variable or even profile document in the database.
When you restart the agent, have some code that checks whether the values are blank (then return 0), otherwise return the last successful value.
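Since the question is about the Java API, a rough Java translation of the same idea might look like this; the view name, column layout and the store/read helpers are placeholders taken from the LotusScript sketch above, not a tested implementation:
import lotus.domino.*;
import java.util.Vector;

// NotesException handling omitted for brevity.
int lastSuccessful = readLastSuccessful();        // placeholder: returns 0 if nothing stored yet
View view = database.getView("MyLookupView");     // assumes the sorted lookup view described above
ViewEntryCollection col = view.getAllEntries();
ViewEntry entry = col.getFirstEntry();
int cnt = 0;
while (entry != null) {
    cnt++;
    if (cnt > lastSuccessful) {
        Vector<?> values = entry.getColumnValues();
        String universalId = (String) values.get(0);
        // ... process the document identified by universalId here ...
        storeLastSuccessful(cnt, universalId);    // placeholder persistence
    }
    ViewEntry next = col.getNextEntry(entry);
    entry.recycle();                              // free the Domino back-end object
    entry = next;
}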
Agents already keep a field to describe documents that they have not yet processed, and these are automatically updated via normal processing.
A better way of doing what you're attempting to do might be to store the results of a search in a profile document. However, if you're trying to relate to documents in a database you do not have write permission to, the only thing you can do is keep a list of the doclinks you've already processed (and any information you need to keep about those documents), or a sister database holding one document for each doclink plus multiple fields related to the processing you've done on them. Then, transfer the lists of IDs and perform the matching on the client to do per-document lookups.
Lotus Notes/Domino databases are designed to be distributed across clients and servers in a replicated environment. In the general case, you do not have a guarantee that starting at a given creation or mod time will bring you consistent results.
If you are 100% certain that no replicas of your target database are ever made, then you can use getModifiedDocuments and then write a sort routine to place (modDateTime,UNID) pairs into a SortedSet or other suitable data structure. Then you can process through the Set, and if you run into an error you can save the modDateTime of the element that you were attempting to process as your restart point. There may be a few additional details for you to work out to avoid duplicates, however, if there are multiple documents with the exact same modDateTime stamp.
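A sketch of that sort step, under the stated assumption of a single non-replicated database; duplicate timestamps are handled by appending the UNID to the key:
import lotus.domino.*;
import java.util.Date;
import java.util.TreeMap;

// NotesException handling omitted. Keys are zero-padded modification times plus
// the UNID, so documents sharing the same timestamp do not overwrite each other.
TreeMap<String, String> byModified = new TreeMap<String, String>();

DocumentCollection modified = database.getModifiedDocuments(startDate, Database.DBMOD_DOC_DATA);
Document doc = modified.getFirstDocument();
while (doc != null) {
    Date lastMod = doc.getLastModified().toJavaDate();
    String key = String.format("%020d-%s", lastMod.getTime(), doc.getUniversalID());
    byModified.put(key, doc.getUniversalID());
    Document next = modified.getNextDocument(doc);
    doc.recycle();
    doc = next;
}

// Process byModified in key (i.e. modification time) order; on failure, persist the
// last key that succeeded and resume from the keys after it, e.g. byModified.tailMap(lastKey, false).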
I want to make one final remark. I understand that you are asking about Java, but if you are working on a backup or archiving system for compliance purposes, the Lotus C API has special functions that you really should look at.
I have an action in Struts2 that will query the database for an object and then copy it with a few changes. Then it needs to retrieve the new objectID from the copy and create a file called objectID.txt.
Here is the relevant code:
Action Class:
ObjectVO objectVOcopy = objectService.searchObjects(objectId);
//Set the ID to 0 so a new row is added, instead of the current one being updated
objectVOcopy.setObjectId(0);
Date today = new Date();
Timestamp currentTime = new Timestamp(today.getTime());
objectVOcopy.setTimeStamp(currentTime);
//Add copy to database
objectService.addObject(objectVOcopy);
//Get the copy object's ID from the database
int newObjectId = objectService.findObjectId(currentTime);
File inboxFile = new File(parentDirectory.getParent()+"\\folder1\\folder2\\"+newObjectId+".txt");
ObjectDAO
//Retrieve identifying ID of copy object from database
List<ObjectVO> object = getHibernateTemplate().find("from ObjectVO where timeStamp = ?", currentTime);
return object.get(0).getObjectId();
The problem is that more often than not, the ObjectDAO search method will not return anything. When debugging I've noticed that the Timestamp currentTime passed to it is usually about 1-2 ms off from the value in the database. I have worked around this by changing the Hibernate query to search for objects with a timestamp within 3 ms of the one passed, but I'm not sure where this discrepancy is coming from. I'm not recalculating currentTime; I'm using the same value to retrieve from the database as I used to write to the database. I'm also worried that when I deploy this to another server the discrepancy might be greater. Other than the objectID, this is the only unique identifier, so I need to use it to get the copy object.
Does anyone know why this is occurring, and is there a better workaround than just searching through a range? I'm using Microsoft SQL Server 2008 R2, by the way.
Thanks.
Precision in SQL Server's DATETIME data type does not precisely match what you can generate in other languages: SQL Server rounds DATETIME values to the nearest 0.000, 0.003 or 0.007 seconds. This is why you can say:
DECLARE @d DATETIME = '20120821 23:59:59.997';
SELECT @d;
Result:
2012-08-21 23:59:59.997
Then try:
DECLARE @d DATETIME = '20120821 23:59:59.999';
SELECT @d;
Result:
2012-08-22 00:00:00.000
Since you are using SQL Server 2008 R2, you should make sure to use the DATETIME2 data type instead of DATETIME.
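For illustration, repeating the same test with DATETIME2 keeps the fractional part you supply instead of rounding it (DATETIME2 defaults to seven fractional digits):
DECLARE @d DATETIME2 = '20120821 23:59:59.999';
SELECT @d;
Result:
2012-08-21 23:59:59.9990000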
That said, @RedFilter makes a good point - why are you relying on the time stamp when you can use the generated ID instead?
This feels wrong.
Other than the objectID, this is the only unique identifier
Databases have the concept of a unique identifier for a reason. You should really use that to retrieve an instance of your object.
You can use the get method on the Hibernate session and take advantage of the session and second level caches as well.
With your approach you execute a query every time you retrieve your object.
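A sketch of that suggestion, assuming ObjectVO uses a generated identifier and the DAO exposes Spring's HibernateTemplate as the getHibernateTemplate() call in the question implies: save() returns the generated ID, so there is no need to look the row up again by timestamp.
// Saving returns the generated identifier directly...
java.io.Serializable newObjectId = getHibernateTemplate().save(objectVOcopy);

// ...and retrieval by primary key can go through get(), which uses the
// session / second-level cache instead of running an HQL query each time.
ObjectVO copy = (ObjectVO) getHibernateTemplate().get(ObjectVO.class, newObjectId);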