I am writing a Java application that has to download only the scheduled reports from a BusinessObjects server. I schedule the reports in InfoView as follows:
1) Click on the report
2) Action --> Schedule
3) Set Recurrence, Format and Destinations
The report then has a number of instances, whereas unscheduled reports have zero instances.
In the code, to separate the scheduled reports I am using
com.crystaldecisions.sdk.occa.infostore.ISchedulingInfo
IInfoObject ifo = ((IInfoObject) result.get( i ));
ISchedulingInfo sche = ifo.getSchedulingInfo();
This should give info about scheduling, right? But for some reason it returns an object (not null, as I would expect) even for unscheduled reports.
And the info returned by its methods (say getBeginDate, getEndDate, etc.) is similar for both kinds.
I tried to filter the reports using SI_CHILDREN > 0 in the query:
"SELECT * FROM CI_INFOOBJECTS WHERE SI_PROGID = 'CrystalEnterprise.Webi'"
+ " AND SI_CHILDREN > 0 AND SI_PARENTID = " + String.valueOf( privateFolderId )
+ " ORDER BY SI_NAME ASC"
Is this the right way to filter scheduled reports?
So Webi, Crystal etc. implement the ISchedulable interface. This means that your non-instance InfoObject WILL return an ISchedulingInfo, regardless of whether or not it has been scheduled.
If an object is scheduled, an instance is created with SI_SCHEDULE_STATUS = 9 (ISchedulingInfo.ScheduleStatus.PENDING).
The job then runs (SI_SCHEDULE_STATUS = 0), and either completes (SI_SCHEDULE_STATUS = 1) or fails (SI_SCHEDULE_STATUS = 3). It can also be paused (SI_SCHEDULE_STATUS = 8).
So to find all instances that are scheduled, you need a query like:
select * from ci_infoObjects where si_instance=1 and si_schedule_status not in (1,3)
This will get you anything that isn't a success or a failure
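If it helps, here is a minimal sketch of running that query through the SDK. It assumes you already hold an authenticated IEnterpriseSession; the "InfoStore" service lookup is the usual idiom, but verify the details against your SDK version.
// Sketch: list instances that are not yet a success (1) or a failure (3).
// enterpriseSession is assumed to be an already-authenticated IEnterpriseSession.
IInfoStore infoStore = (IInfoStore) enterpriseSession.getService("InfoStore");
IInfoObjects instances = infoStore.query(
        "SELECT * FROM CI_INFOOBJECTS WHERE SI_INSTANCE = 1 AND SI_SCHEDULE_STATUS NOT IN (1, 3)");
for (Object o : instances) {
    IInfoObject instance = (IInfoObject) o;
    System.out.println(instance.getTitle() + " -> schedule status "
            + instance.getSchedulingInfo().getStatus());
}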
A scheduled report will have a child instance which holds the scheduling information and has the scheduled report as its parent. (You can see this instance in the history list in BI Launch Pad.)
You can retrieve recurrently scheduled child instances from the CMS like this:
SELECT * FROM CI_INFOOBJECTS WHERE SI_PROGID = 'CrystalEnterprise.Webi'
and si_recurring = 1
This will isolate the reports which are scheduled to be executed (or, to be more precise, the child "scheduling" instances described above). You can then call getSchedulingInfo() on the child instance to get further info about the schedule.
Bear in mind that the SI_PARENTID field, not the SI_ID field, returned by the above query gives you the ID of the original WebI report.
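For example, a rough sketch along those lines (reusing the IInfoStore idiom from the earlier answer; the exact names are assumptions):
// Sketch: walk the recurring "scheduling" child instances and map each back to its parent report.
IInfoObjects recurring = infoStore.query(
        "SELECT * FROM CI_INFOOBJECTS WHERE SI_PROGID = 'CrystalEnterprise.Webi' AND SI_RECURRING = 1");
for (Object o : recurring) {
    IInfoObject child = (IInfoObject) o;
    ISchedulingInfo sched = child.getSchedulingInfo();
    System.out.println("Runs from " + sched.getBeginDate() + " to " + sched.getEndDate()
            + ", parent report ID = " + child.getParentID());
}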
My scenario is like this.
Solr indexing happens for a product, and then the product's approval status is set to unapproved from the Backoffice. After that, when you search the website for words from the product's description, or directly for the product code, you get a server error, since the product that was made unapproved is still present in Solr.
If you manually run any type of indexing from the Backoffice, it works again. But that is not a good solution, since there might be lots of products whose status changes, and it is not a solution that takes effect instantly. Using a cronjob for indexing is not a fast solution either; you get server errors until the cronjob runs.
I would like to update the Solr index instantly for attributes which change frequently, like price, status, etc. For instance, when an attribute changes, is it a good approach to start a partial index immediately in Java code? If so, how (via IndexerService?)? Alternatively, is it a better idea to make an HTTP request to Solr for the attribute?
In summary, I am looking for the best solution to perform partial index.
Any ideas?
For this case you need to write two new pieces of Solr configuration:
1) A new Solr cronjob that triggers the indexing
2) A new SolrIndexerQuery for indexing with your special requirements.
When you have a look at the default stuff from hybris you see:
INSERT_UPDATE CronJob;code[unique=true];job(code);singleExecutable;sessionLanguage(isocode);active;
;backofficeSolrIndexerUpdateCronJob;backofficeSolrIndexerUpdateJob;false;en;false;
INSERT Trigger;cronJob(code);active;activationTime;year;month;day;hour;minute;second;relative;weekInterval;daysOfWeek;
;backofficeSolrIndexerUpdateCronJob;true;;-1;-1;-1;-1;-1;05;false;0;;
The part above configures when the job should run. You can modify it so that it runs every 5 seconds, for example.
INSERT_UPDATE SolrIndexerQuery; solrIndexedType(identifier)[unique = true]; identifier[unique = true]; type(code); injectCurrentDate[default = true]; injectCurrentTime[default = true]; injectLastIndexTime[default = true]; query; user(uid)
; $solrIndexedType ; $solrIndexedType-updateQuery ; update ; false ; false ; false ; "SELECT DISTINCT {PK} FROM {Product AS p JOIN VariantProduct AS vp ON {p.PK}={vp.baseProduct} } WHERE {p.modifiedtime} >= ?lastStartTimeWithSuccess OR {vp.modifiedtime} >= ?lastStartTimeWithSuccess" ; admin
The second part is the more important one: here you define which products should be indexed. You can see that the update job looks for every product that was modified since the last successful run. This is where you could write a new FlexibleSearch query with your special requirements.
tl;dr answer: you have to write a new, performant solrIndexerQuery that can be triggered every 5 seconds.
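If you do want to kick off a partial index directly from Java when an attribute changes (the IndexerService route mentioned in the question), a sketch could look roughly like the following. The bean names, the config name "myProductIndexConfig" and the exact updateTypeIndex signature are assumptions to verify against your hybris version:
// Sketch (unverified hybris API details): partially reindex only the changed product.
// facetSearchConfigService and indexerService are assumed to be injected Spring beans.
FacetSearchConfig config = facetSearchConfigService.getConfiguration("myProductIndexConfig");
for (IndexedType indexedType : config.getIndexConfig().getIndexedTypes().values()) {
    indexerService.updateTypeIndex(config, indexedType,
            Collections.singletonList(changedProduct.getPk()));
}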
I have a task that simply creates an entity in the datastore. I now queue up many tasks into a named push queue and let it run. When it completes, I see in the log that all of the task requests were run. However, the number of entities created was actually lower than expected.
The following is an example of the code I used to test this. I ran 10000 tasks and the final result only has around 9200 entities in the datastore.
I use RestEasy to expose urls for the task queues.
queue.xml
<queue>
<name>testQueue</name>
<rate>5/s</rate>
</queue>
Test Code
@GET
@Path("/queuetest/{numTimes}")
public void queueTest(@PathParam("numTimes") int numTimes) {
    for (int i = 1; i <= numTimes; i++) {
        Queue queue = QueueFactory.getQueue("testQueue");
        TaskOptions taskOptions = TaskOptions.Builder.withUrl("/queuetest/worker/" + i).method(Method.GET);
        queue.add(taskOptions);
    }
}
@GET
@Path("/queuetest/worker/{index}")
public void queueTestWorker(@PathParam("index") String index) {
    DateFormat df = new SimpleDateFormat("MM/dd/yyyy HH:mm:ss");
    Date today = Calendar.getInstance().getTime();
    String timestamp = df.format(today);
    Entity tObj = new Entity("TestObj");
    tObj.setProperty("identifier", index);
    tObj.setProperty("timestamp", timestamp);
    DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
    Key key = datastore.put(tObj);
}
I have run this a few times and not once have I seen all of the entities created.
Is it possible that tasks can be discarded if there is too much contention on the queue?
Is this the expected behavior for a task queue?
#### EDIT
I followed mitch's suggestion to log the entity IDs that are created and found that they are indeed created as expected. But the logs themselves show some strange behavior, in which log lines from some tasks appear in another task's log. And when that happens, some tasks show 2 entity IDs in a single request.
For the tasks that show 2 entity IDs, the first one logged is the one missing from the datastore. Does this mean there is a problem with a high number of puts to the datastore? (The entities I'm creating are NOT part of a larger entity group, i.e. they don't refer to a @Parent.)
Why don't you add a log statement after each datastore.put() call which logs the ID of the newly created entity? Then you can compare the log to the datastore contents, and you will be able to tell whether the problem is that datastore.put() is not being invoked successfully 10,000 times, or that some of the successful put calls are not resulting in entities that you see in the datastore.
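A minimal sketch of that logging (using java.util.logging; the class and helper names here are arbitrary):
// Sketch: log the key returned by each put so the log can be diffed against the datastore later.
import java.util.logging.Logger;
import com.google.appengine.api.datastore.Key;

public class PutLogger {
    private static final Logger LOG = Logger.getLogger(PutLogger.class.getName());

    public static void logPut(String index, Key key) {
        // key.getId() is the numeric ID the datastore allocated for the new entity
        LOG.info("task index=" + index + " stored entity id=" + key.getId());
    }
}
// In queueTestWorker(), right after the put:
//     Key key = datastore.put(tObj);
//     PutLogger.logPut(index, key);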
Here I have a thread pool and another polling class that implements polling and reads the messages from the database. The problem is that I have to avoid reading the same messages redundantly when updating, while still processing the other waiting messages at the same time, since there is a vast number of messages waiting.
// the code for poll method
public void poll() throws Exception {
// Method which defines polling of the data entry for counting its size.
st = conn.createStatement();
int count = 1;
long waitInMillisec = 1 * 60 * 125; // Wait for 7.5 seconds.
for (int i = 0; i < count; i++) {
System.out.println("Wait for " + waitInMillisec + " millisec");
Thread.sleep(waitInMillisec);
java.util.Date date = new java.util.Date();
Timestamp start = new Timestamp(date.getTime());
rs = st.executeQuery("select * from msg_new_to_bde where ACTION=804");
java.util.Date date1 = new java.util.Date();
Timestamp end = new Timestamp(date1.getTime());
System.out.print("Query count: ");
System.out.println(end.getTime() - start.getTime());
Collection<KpiMessage> pojoCol = new ArrayList<KpiMessage>();
while (rs.next()) {
KpiMessage filedClass = convertRecordsetToPojo(rs);
pojoCol.add(filedClass);
}
I don't know if you have a choice on how your messages are stored, but they appear to be inserted into a table that you're polling. You might add a database trigger to this table that in turn pushes a message into an Oracle AQ with the same data plus a correlation id.
If you can do without the table, I would suggest just defining the Oracle AQ in the same schema to store the messages, and dequeue by partial correlation id using pattern matching like corrid="804%". The full correlation id for the AQ message might be "804" + the unique pk of the message. You could then reuse this same queue for multiple actions for example, and define a Java queue 804 action worker class to wait on messages of that particular action (804 correlation id prefix on the AQ messages).
The documentation at Oracle is pretty good for AQ; the package you would use to create the queue is dbms_aqadm, and the package you would use to enqueue/dequeue is dbms_aq. There are a few privileges/grants you'll need before the AQ can be created and the dbms_aq packages can be used. dbms_aq should be easily callable from Java.
Go to docs.oracle.com to look up the details on the dbms_aqadm and dbms_aq packages. Once you create the AQ (which will create an AQ table that backs the queue), I would suggest you add an index on corrid to the AQ table for performance.
If you can't avoid the current table architecture you have in place or don't want to get into AQ technology, the other option you could use is to create a lock in Oracle (dbms_lock package) and call that in your polling class to obtain the lock or block/wait. That way you synchronize all your polling classes to avoid multiple threads from picking up the same message. So the first thing the polling class would do is to try to obtain the lock, if successful it pulls a message out of the table, processes it, updates it as processed, releases the lock. The dbms_lock package can block/wait for the lock or return immediately, but based on the operations success/failure you can take further action. But it will help you control the threads from picking up the same message. Oracle's docs are pretty good on this package too.
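For illustration, here is a rough JDBC sketch of that dbms_lock approach. The lock name, the exclusive mode, the 10-second timeout and the privilege setup are assumptions; check the dbms_lock documentation for the exact semantics you need.
// Sketch: serialize the pollers with DBMS_LOCK so only one thread/JVM reads messages at a time.
// conn is assumed to be an open java.sql.Connection with EXECUTE granted on DBMS_LOCK.
String handle = null;
try (CallableStatement alloc = conn.prepareCall("begin dbms_lock.allocate_unique(?, ?); end;")) {
    alloc.setString(1, "MSG_NEW_TO_BDE_POLL");          // arbitrary application lock name
    alloc.registerOutParameter(2, java.sql.Types.VARCHAR);
    alloc.execute();
    handle = alloc.getString(2);                        // handle used for request/release
}
try (CallableStatement req = conn.prepareCall(
        "begin ? := dbms_lock.request(?, dbms_lock.x_mode, 10, false); end;")) {
    req.registerOutParameter(1, java.sql.Types.INTEGER);
    req.setString(2, handle);
    req.execute();
    if (req.getInt(1) == 0) {                           // 0 = lock granted
        // ... poll the table, process the messages, mark them as processed ...
        try (CallableStatement rel = conn.prepareCall("begin ? := dbms_lock.release(?); end;")) {
            rel.registerOutParameter(1, java.sql.Types.INTEGER);
            rel.setString(2, handle);
            rel.execute();
        }
    }
}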
I'd like iterate over every document in a (probably big) Lotus Domino database and be able to continue it from the last one if the processing breaks (network connection error, application restart etc.). I don't have write access to the database.
I'm looking for a way where I don't have to download those documents from the server which were already processed. So, I have to pass some starting information to the server which document should be the first in the (possibly restarted) processing.
I've checked the AllDocuments property and the DocumentCollection.getNthDocument method, but this property is unsorted, so I guess the order can change between two calls.
Another idea was using a formula query but it does not seem that ordering is possible with these queries.
The third idea was the Database.getModifiedDocuments method with the corresponding Document.getLastModified one. It seemed good, but it looks to me that the ordering of the returned collection is not documented and is based on creation time instead of last modification time.
Here is a sample code based on the official example:
System.out.println("startDate: " + startDate);
final DocumentCollection documentCollection =
database.getModifiedDocuments(startDate, Database.DBMOD_DOC_DATA);
Document doc = documentCollection.getFirstDocument();
while (doc != null) {
System.out.println("#lastmod: " + doc.getLastModified() +
" #created: " + doc.getCreated());
doc = documentCollection.getNextDocument(doc);
}
It prints the following:
startDate: 2012.07.03 08:51:11 CEDT
#lastmod: 2012.07.03 08:51:11 CEDT #created: 2012.02.23 10:35:31 CET
#lastmod: 2012.08.03 12:20:33 CEDT #created: 2012.06.01 16:26:35 CEDT
#lastmod: 2012.07.03 09:20:53 CEDT #created: 2012.07.03 09:20:03 CEDT
#lastmod: 2012.07.21 23:17:35 CEDT #created: 2012.07.03 09:24:44 CEDT
#lastmod: 2012.07.03 10:10:53 CEDT #created: 2012.07.03 10:10:41 CEDT
#lastmod: 2012.07.23 16:26:22 CEDT #created: 2012.07.23 16:26:22 CEDT
(I don't use any AgentContext here to access the database. The database object comes from a session.getDatabase(null, databaseName) call.)
Is there any way to reliably do this with the Lotus Domino Java API?
If you have access to change the database, or could ask someone to do so, then you should create a view that is sorted on a unique key, or modified date, and then just store the "pointer" to the last document processed.
Barring that, you'll have to maintain a list of previously processed documents yourself. In that case you can use the AllDocuments property and just iterate through them. Use the GetFirstDocument and GetNextDocument as they are reportedly faster than GetNthDocument.
Alternatively you could make two passes, one to gather a list of UNIDs for all documents, which you'll store, and then make a second pass to process each document from the list of UNIDs you have (using GetDocumentByUNID method).
I don't use the Java API, but in Lotusscript, I would do something like this:
Locate a view displaying all documents in the database. If you want the agent to be really fast, create a new view. The first column should be sorted and could contain the Universal ID of the document. The other columns contain all the values you want to read in your agent; in your example that would be the created date and last modified date.
Your code could then simply loop through the view like this:
lastSuccessful = FunctionToReadValuesSomewhere() ' Returns 0 if empty
Set view = thisdb.GetView("MyLookupView")
Set col = view.AllEntries
Set entry = col.GetFirstEntry
cnt = 0
Do Until entry is Nothing
cnt = cnt + 1
If cnt > lastSuccessful Then
universalID = entry.ColumnValues(0)
createDate = entry.ColumnValues(1)
lastmodifiedDate = entry.ColumnValues(2)
Call YourFunctionToDoStuff(universalID, createDate, lastmodifiedDate)
Call FunctionToStoreValuesSomeWhere(cnt, universalID)
End If
Set entry = col.GetNextEntry(entry)
Loop
Call FunctionToClearValuesSomeWhere()
Simply store the last successful value and Universal ID in say a text file or environment variable or even profile document in the database.
When you restart the agent, have some code that checks if the values are blank (then return 0); otherwise return the last successful value.
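Since the question is about the Java API, a rough translation of that loop might look like this (assuming a sorted view named "MyLookupView" with the same column layout; the checkpoint helpers are hypothetical):
// Sketch: walk a sorted view and resume after the last successfully processed entry.
// Assumes lotus.domino.View/ViewEntry/ViewEntryCollection and a Database object named db.
View view = db.getView("MyLookupView");
ViewEntryCollection col = view.getAllEntries();
int lastSuccessful = readLastSuccessfulCount();        // hypothetical helper, returns 0 if nothing stored
int cnt = 0;
ViewEntry entry = col.getFirstEntry();
while (entry != null) {
    cnt++;
    if (cnt > lastSuccessful) {
        java.util.Vector<?> values = entry.getColumnValues();
        String universalId = (String) values.get(0);   // column 1: UNID; columns 2/3: created, modified
        processDocument(universalId, values);          // hypothetical processing method
        storeLastSuccessful(cnt, universalId);         // hypothetical checkpoint persistence
    }
    ViewEntry next = col.getNextEntry(entry);
    entry.recycle();                                   // release the Domino back-end object
    entry = next;
}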
Agents already keep a field to describe documents that they have not yet processed, and these are automatically updated via normal processing.
A better way of doing what you're attempting might be to store the results of a search in a profile document. However, since you're dealing with documents in a database you do not have write permission to, the only things you can do are to keep a list of the doclinks you've already processed (and any information you need to keep about those documents), or to maintain a sister database holding one document for each doclink plus whatever fields relate to the processing you've done on them. Then transfer the lists of IDs and perform the matching on the client to do per-document lookups.
Lotus Notes/Domino databases are designed to be distributed across clients and servers in a replicated environment. In the general case, you do not have a guarantee that starting at a given creation or mod time will bring you consistent results.
If you are 100% certain that no replicas of your target database are ever made, then you can use getModifiedDocuments and then write a sort routine to place (modDateTime,UNID) pairs into a SortedSet or other suitable data structure. Then you can process through the Set, and if you run into an error you can save the modDateTime of the element that you were attempting to process as your restart point. There may be a few additional details for you to work out to avoid duplicates, however, if there are multiple documents with the exact same modDateTime stamp.
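A minimal sketch of that idea, under the assumptions above (startDate is the saved restart point):
// Sketch: collect (lastModified, UNID) pairs sorted by modification time so processing can be
// resumed from the last modification timestamp that was handled successfully.
java.util.TreeMap<java.util.Date, java.util.List<String>> byModified = new java.util.TreeMap<>();
DocumentCollection col = database.getModifiedDocuments(startDate, Database.DBMOD_DOC_DATA);
Document doc = col.getFirstDocument();
while (doc != null) {
    java.util.Date modified = doc.getLastModified().toJavaDate();
    byModified.computeIfAbsent(modified, d -> new java.util.ArrayList<>()).add(doc.getUniversalID());
    Document next = col.getNextDocument(doc);
    doc.recycle();
    doc = next;
}
// Iterate byModified in order; after each UNID is processed, persist its modified date as the
// restart point (getDocumentByUNID can be used later to re-open an individual document).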
I want to make one final remark. I understand that you are asking about Java, but if you are working on a backup or archiving system for compliance purposes, the Lotus C API has special functions that you really should look at.
I'm making a Swing database app based on EJB 3 technology. I'm using NetBeans 7.0.1. When the program starts up, it fetches all the data from the database:
private javax.persistence.Query spareQuery;
private java.util.List<Spares> spareList;
...
spareQuery = entityManager.createQuery("SELECT s FROM Spares s ORDER BY s.id");
spareList = org.jdesktop.observablecollections.ObservableCollections.
observableList(spareQuery.getResultList());
Fetching all the data from the database causes a significant pause in the start-up process.
For now, I need a wrapper for the javax.persistence.Query interface which will do the following:
Initialization:
spareQuery = entityManager.createQuery("SELECT s FROM Spares s ORDER BY s.id");
spareQuery = new MyQueryWrapper ( spareQuery );
Main part! After that, when this is called:
spareList = org.jdesktop.observablecollections.ObservableCollections.
observableList(spareQuery.getResultList());
Instead of waiting for all the data to be received from the server, the Query instance should split the query into chunks and, after every chunk is retrieved, add the data to the list (as the list is observable, every portion of data will appear in the associated JTable). As a result, we'll have a smooth and fast start-up.
So the logic should work like this:
SELECT s FROM Spares s WHERE s.id BETWEEN 1 AND 20 ORDER BY s.id
Add data to the list.
...
SELECT s FROM Spares s WHERE s.id BETWEEN 80 AND 100 ORDER BY s.id
Add data to the list.
QUESTION: Is there any library which can replace (wrap) EntityManager, Query or something else to achieve soft asynchronous data fetching from database using EJB3 technology?
Why do you need a library for that? Just run your EntityManager instance and your Query execution in another thread, and then bring the return values back to Swing's event dispatch thread when they're available. You could use SwingWorker or an ExecutorService to implement this, but for such a simple task you might be better off just starting a thread with a callback.
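For illustration, here is a minimal SwingWorker sketch of the chunked approach. The page size and the use of setFirstResult/setMaxResults for paging are assumptions, and it presumes the EntityManager is used only from the background thread (EntityManager instances are not thread-safe):
// Sketch: fetch Spares page by page off the EDT and append each page to the
// observable list on the EDT, so the bound JTable fills in progressively.
final int pageSize = 20;
new SwingWorker<Void, List<Spares>>() {
    @Override
    protected Void doInBackground() {
        int offset = 0;
        while (true) {
            @SuppressWarnings("unchecked")
            List<Spares> page = entityManager
                .createQuery("SELECT s FROM Spares s ORDER BY s.id")
                .setFirstResult(offset)
                .setMaxResults(pageSize)
                .getResultList();
            if (page.isEmpty()) {
                break;                       // no more rows
            }
            publish(page);                   // hand this chunk to process() on the EDT
            offset += page.size();
        }
        return null;
    }

    @Override
    protected void process(List<List<Spares>> chunks) {
        for (List<Spares> chunk : chunks) {
            spareList.addAll(chunk);         // observable list notifies the bound JTable
        }
    }
}.execute();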