In Java, fetching entities with a query sometimes returns fewer entities than expected, in rare cases. I am using the JDO PersistenceManager. Is it fine to keep using it, or do I need to switch to a low-level Datastore fetch to solve this?
String query = "CUID == '" + cuidKey + "' && staffKey == '" + staffKey + "' && StartTimeLong >= " + startDate + " && StartTimeLong < " + endDate + " && status == 'confirmed'";
List<ResultJDO> tempResultList = jdoUtils.fetchEntitiesByQueryWithRangeOrder(ResultJDO.class, query, null, null, "StartTimeLong desc");
In rare cases the query returns only 4 entities, but most of the time it returns all 5.
jdoUtils is a PersistenceManager object.
Should I switch to a low-level Datastore fetch to get exact results?
I have tried researching the library you mentioned and similar issues, and found nothing so far. With so little information it's hard to know why this is happening or how to fix it.
On the other hand, the recommended way to programmatically interact with Google Cloud Platform products is through Google's client libraries, since they are already tested and assured to work in almost all cases. Furthermore, using them allows you to open GitHub issues if you find any problem, so that the developers can address them. For the rare cases where you need some functionality not already covered, you can open a feature request or call the APIs directly.
In addition to Google's libraries, there are two other options for Java that are under active development: Objectify and Catatumbo.
I would suggest switching to the Java Datastore libraries. You can find examples of how to interact with Datastore in link1 and link2. You can also find community-shared code samples on this programcreek page.
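If you do switch, here is a minimal sketch of the same query using the google-cloud-datastore client library. The kind name "Result" is a hypothetical stand-in for however your ResultJDO entities are stored, and cuidKey, staffKey, startDate and endDate are the variables from your snippet (assumed to be Strings and longs respectively):

import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreOptions;
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.Query;
import com.google.cloud.datastore.QueryResults;
import com.google.cloud.datastore.StructuredQuery.CompositeFilter;
import com.google.cloud.datastore.StructuredQuery.OrderBy;
import com.google.cloud.datastore.StructuredQuery.PropertyFilter;

// Build the equivalent of the JDO query above.
Datastore datastore = DatastoreOptions.getDefaultInstance().getService();
Query<Entity> query = Query.newEntityQueryBuilder()
        .setKind("Result") // hypothetical kind name
        .setFilter(CompositeFilter.and(
                PropertyFilter.eq("CUID", cuidKey),
                PropertyFilter.eq("staffKey", staffKey),
                PropertyFilter.ge("StartTimeLong", startDate),
                PropertyFilter.lt("StartTimeLong", endDate),
                PropertyFilter.eq("status", "confirmed")))
        .setOrderBy(OrderBy.desc("StartTimeLong"))
        .build();
QueryResults<Entity> results = datastore.run(query);
while (results.hasNext()) {
    Entity entity = results.next();
    // map each entity back to your model class here
}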
I'm trying to fix a little plugin that I'm making for Minecraft servers. The plugin uses code that tries to adjust automatically to the server's needs: first it converts the existing tables into old tables and creates the new ones; then, using some objects that contain human decisions about how to parse or update specific information, it copies all the data that isn't already duplicated into the new tables and finally removes the old ones.
The code is kind of messy; I haven't had much time these days, but I was trying to find a free week to rewrite all of the plugin's code. Everything was working fine until one day I updated the plugin on a server I use for testing, which runs MySQL. I had been using the same code the whole time without problems on that server, but after some time without using it, it no longer works.
This is the part of the code that is failing:
protected boolean tables() {
    boolean update = false, result = update;
    // Ensure the version-tracking table exists.
    if (!this.sql.execute(
            "CREATE TABLE IF NOT EXISTS information(param VARCHAR(16),value VARCHAR(16),CONSTRAINT PK_information PRIMARY KEY (param));",
            new Data[0]))
        return false;
    List<String> tlist = new ArrayList<>();
    try {
        this.sql.execute("SET FOREIGN_KEY_CHECKS=0;", new Data[0]);
        ResultSet set = this.sql.query("SELECT value FROM information WHERE `param`='version';", new Data[0]);
        String version = "";
        if (set.next())
            version = set.getString(1);
        if (!version.equals(MMOHorsesMain.getPlugin().getDescription().getVersion())) {
            update = true;
            // Copy every table except 'information' into a corresponding *_old table.
            ResultSet tables = this.sql.query("SHOW TABLES;", new Data[0]);
            while (tables.next()) {
                String name = tables.getString(1);
                if (!name.equals("information")) {
                    if (!this.sql.execute("CREATE TABLE " + name + "_old LIKE " + name + ";", new Data[0]))
                        throw new Exception();
                    if (!this.sql.execute("INSERT INTO " + name + "_old SELECT * FROM " + name + ";", new Data[0]))
                        throw new Exception();
                    tlist.add(name);
                }
            }
            // Build a comma-separated list and drop all the original tables at once.
            String remove = "";
            for (String table : tlist)
                remove = String.valueOf(remove) + (remove.isEmpty() ? "" : ",") + table;
            this.sql.reconnect();
            this.sql.execute("DROP TABLE IF EXISTS " + remove + ";", new Data[0]);
The database stores an extra row holding the plugin version. I use it to check whether the database comes from another version and, if so, regenerate the database. It works fine on SQLite; the problem only appears on MySQL.
The first part reads the current version and checks it. The plugin starts by disabling foreign key checks. This is not the prettiest part but, as I said, I haven't had time to rewrite all this code; it also comes from a compiled build, because due to some GitHub issues I lost part of the latest updates. If an update is required, the code turns every table into a _old table. Everything works fine up to this point: the data is copied into the _old tables and handled correctly. The problem is when it has to remove the original tables.
DROP TABLE IF EXISTS cosmetics,horses,inventories,items,trust,upgrades;
This is the SQL statement used to remove the original tables. I'm not sure whether it works like this, but the _old tables inherit the foreign keys from the original tables, and when I try to drop the originals it isn't allowed, even though FOREIGN_KEY_CHECKS is set to 0. I also added a debug check beforehand to confirm the checking really was disabled, and it was. To simulate the environment people typically work with, I'm using a prebuilt Minecraft hosting from a friend, running MariaDB 10.4.12.
I've asked him whether he updated it since the last time I prepared this server, but I'm still waiting for his answer. In any case, whether it's a newer or older MariaDB version, what I'm trying to do is make the plugin as flexible as possible so it adapts to different versions without problems. Everything else seems to work, but since I can't delete the original tables, I can't replace them with the new format.
I hope this is just an error that happens with certain DB configurations, but I'd like an answer from someone knowledgeable to make sure I didn't upload a broken version.
Thank you nicomp, the answer was keeping the same session. My reconnect method is not very flexible; it comes from some strange experiences with high latency and sessions lasting about a second, where the connection would drop after doing nothing. It was detecting the connection state incorrectly, so it reconnected and threw away the session's configuration.
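For context: SET FOREIGN_KEY_CHECKS=0 only applies to the current session, so the reconnect() call right before the DROP opened a fresh session with the checks re-enabled. A minimal sketch of the fix, assuming a plain java.sql.Connection underneath (the sql wrapper class isn't shown in the question):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

static void dropOldTables(Connection connection) throws SQLException {
    // FOREIGN_KEY_CHECKS is a session variable: every statement below
    // must run on the SAME connection, with no reconnect in between.
    try (Statement st = connection.createStatement()) {
        st.execute("SET FOREIGN_KEY_CHECKS=0");
        // ... create the *_old tables and copy the data here ...
        st.execute("DROP TABLE IF EXISTS cosmetics,horses,inventories,items,trust,upgrades");
        st.execute("SET FOREIGN_KEY_CHECKS=1");
    }
}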
I'm implementing a fairly complex search using the MarkLogic Java API. I would like to enable relevance-trace to see how my results are scored. Unfortunately, I don't know how to enable it in the Java API. I have tried something like:
DatabaseClient client = initClient();
var qmo = client.newServerConfigManager().newQueryOptionsManager();
var searchOptions = "<search:options xmlns=\"http://marklogic.com/appservices/search\">\n"
+ " <search-option>relevance-trace</search-option>\n"
+ " </search:options>";
qmo.writeOptions("searchOptions", new StringHandle(searchOptions).withFormat(Format.XML));
QueryManager qm = client.newQueryManager();
StructuredQueryBuilder qb = qm.newStructuredQueryBuilder("searchOptions");
// query definition
qm.search(query, new SearchHandle());
Unfortunately it ends up with the following error:
"Local message: /config/query write failed: Internal Server Error. Server Message: XDMP-DOCNONSBIND:
xdmp:get-request-body(\"xml\") -- No namespace binding for prefix search at line 1 . See the
MarkLogic server error log for further detail."
My question is how to use search options in the MarkLogic Java API; in particular I'm interested in relevance-trace and simple-score.
Update 1
As suggested by @Jamess Kerr, I have changed my options to
var searchOptions = "<options xmlns=\"http://marklogic.com/appservices/search\">\n"
+ " <search-option>relevance-trace</search-option>\n"
+ " </options>";
but unfortunately it still doesn't work. After that change I get this error:
Local message: /config/query write failed: Internal Server Error. Server Message: XDMP-UPDATEFUNCTIONFROMQUERY: xdmp:apply(function() as item()*) -- Cannot apply an update function from a query . See the MarkLogic server error log for further detail.
Your search options XML uses the search: namespace prefix, but you don't define that prefix. Since you are setting the default namespace, just drop the search: prefix from the search:options opening and closing tags.
The original Java query contains both syntactic and semantic issues:
First of all, it is invalid as MarkLogic XQuery, in the sense that it contains only the query options portion. Omitting the namespace binding for the prefix is another wrong end of the stick.
To tweak your original query, replace the search text between the search:qtext tags (the pink line in the original screenshot) and run the query.
Result:
Matched and Listing 2 documents:
Matched 1 locations in /medals/coin_1333113127296.xml with 94720 score:
73. …pulsating maple leaf coin another world-first, the [Royal Canadian Mint]is proud to launch a numismatic breakthrough from its ambitious and creative R&D team...
Matched 1 locations in /medals/coin_1333078361643.xml with 94720 score:
71. ...the [Royal Canadian Mint]and Royal Australian Mint have put an end to the dispute relating to the circulation coin colouring process...
To put it into context: without a semantic criterion, your original query is equivalent to removing the search:qtext and performing a fuzzy search.
Note:
If you use a serialised term search or search constraints instead of a plain text search, you should get higher-scoring results.
The MarkLogic Java API operates in unfiltered mode by default, while cts:search operates in filtered mode by default. Just be mindful of how you construct the query and the score you expect in the Java API.
The Java API is really intended for bulk data write/extract/transformation. qconsole is, in my opinion, better suited to tuning a specific query and gathering search score, relevance and computation details.
I'm using hibernate-search-elasticsearch 5.8.2.Final and I can't figure out how to get script fields:
https://www.elastic.co/guide/en/elasticsearch/reference/5.6/search-request-script-fields.html
Is there any way to accomplish this functionality?
This is not possible in Hibernate Search 5.8.
In Hibernate Search 5.10 you could get direct access to the REST client, send a REST request to Elasticsearch and get the result as a JSON string that you would have to parse yourself, but it is very low-level and you would not benefit from the Hibernate Search search APIs at all (no query DSL, no managed entity loading, no direct translation entity type => index name, ...).
If you want better support for this feature, don't hesitate to open a ticket on our JIRA, describing in detail what you are trying to achieve and how you would have expected to be able to do it. We are currently working on Search 6.0, which brings a lot of improvements, in particular when it comes to using native features of Elasticsearch, so it just might be something we could slip into our backlog.
EDIT: I forgot to mention that, while you cannot use server-side scripts, you can still get the full source from your documents, and do some parsing in your application to achieve a similar result. This will work even in Search 5.8:
FullTextEntityManager fullTextEm = Search.getFullTextEntityManager(entityManager);
// The QueryBuilder was implied in the original snippet; shown here for completeness.
QueryBuilder qb = fullTextEm.getSearchFactory()
        .buildQueryBuilder().forEntity( VideoGame.class ).get();
FullTextQuery query = fullTextEm.createFullTextQuery(
        qb.keyword()
                .onField( "tags" )
                .matching( "round-based" )
                .createQuery(),
        VideoGame.class
)
        .setProjection( ElasticsearchProjectionConstants.SCORE, ElasticsearchProjectionConstants.SOURCE );
Object[] projections = (Object[]) query.getSingleResult();
float score = (float) projections[0];
String source = (String) projections[1];
See this section of the documentation.
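While you cannot run the script server-side, you can compute the same value client-side from the SOURCE projection. A hedged sketch, assuming Gson as the JSON parser and hypothetical "price" and "discount" fields in the document:

import com.google.gson.JsonObject;
import com.google.gson.JsonParser;

// `source` is the String obtained from ElasticsearchProjectionConstants.SOURCE above.
JsonObject json = new JsonParser().parse( source ).getAsJsonObject();
// Emulate a script field such as "price * (1 - discount)" on the client side;
// both field names are hypothetical.
double discountedPrice = json.get( "price" ).getAsDouble()
        * ( 1.0 - json.get( "discount" ).getAsDouble() );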
I'd like to iterate over every document in a (probably big) Lotus Domino database and be able to continue from the last one if processing breaks (network connection error, application restart, etc.). I don't have write access to the database.
I'm looking for a way to avoid downloading documents from the server that were already processed. So I have to pass some starting information to the server indicating which document should be the first in the (possibly restarted) processing.
I've checked the AllDocuments property and the DocumentCollection.getNthDocument method, but this property is unsorted, so I guess the order can change between two calls.
Another idea was using a formula query, but ordering does not seem to be possible with those queries.
The third idea was the Database.getModifiedDocuments method with the corresponding Document.getLastModified one. It seemed promising, but the ordering of the returned collection does not appear to be documented, and it seems to be based on creation time instead of last modification time.
Here is some sample code based on the official example:
System.out.println("startDate: " + startDate);
final DocumentCollection documentCollection =
database.getModifiedDocuments(startDate, Database.DBMOD_DOC_DATA);
Document doc = documentCollection.getFirstDocument();
while (doc != null) {
System.out.println("#lastmod: " + doc.getLastModified() +
" #created: " + doc.getCreated());
doc = documentCollection.getNextDocument(doc);
}
It prints the following:
startDate: 2012.07.03 08:51:11 CEDT
#lastmod: 2012.07.03 08:51:11 CEDT #created: 2012.02.23 10:35:31 CET
#lastmod: 2012.08.03 12:20:33 CEDT #created: 2012.06.01 16:26:35 CEDT
#lastmod: 2012.07.03 09:20:53 CEDT #created: 2012.07.03 09:20:03 CEDT
#lastmod: 2012.07.21 23:17:35 CEDT #created: 2012.07.03 09:24:44 CEDT
#lastmod: 2012.07.03 10:10:53 CEDT #created: 2012.07.03 10:10:41 CEDT
#lastmod: 2012.07.23 16:26:22 CEDT #created: 2012.07.23 16:26:22 CEDT
(I don't use any AgentContext here to access the database. The database object comes from a session.getDatabase(null, databaseName) call.)
Is there any way to reliably do this with the Lotus Domino Java API?
If you have access to change the database, or could ask someone to do so, then you should create a view that is sorted on a unique key, or modified date, and then just store the "pointer" to the last document processed.
Barring that, you'll have to maintain a list of previously processed documents yourself. In that case you can use the AllDocuments property and just iterate through them. Use GetFirstDocument and GetNextDocument, as they are reportedly faster than GetNthDocument.
Alternatively, you could make two passes: one to gather a list of UNIDs for all documents, which you'll store, and then a second pass to process each document from the stored list of UNIDs (using the GetDocumentByUNID method), as in the sketch below.
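A rough sketch of that two-pass approach with the Domino Java API (exception handling and the durable storage of the UNID list are omitted; database is assumed to be an open lotus.domino.Database):

import java.util.ArrayList;
import java.util.List;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.DocumentCollection;

// Pass 1: collect the UNID of every document.
DocumentCollection all = database.getAllDocuments();
List<String> unids = new ArrayList<String>();
Document doc = all.getFirstDocument();
while (doc != null) {
    unids.add(doc.getUniversalID());
    Document next = all.getNextDocument(doc);
    doc.recycle();
    doc = next;
}
// Persist `unids` somewhere durable, then...
// Pass 2: process document by document, recording progress as you go.
for (String unid : unids) { // skip entries already marked as processed
    Document d = database.getDocumentByUNID(unid);
    // ... process d, then record unid as done ...
    d.recycle();
}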
I don't use the Java API, but in LotusScript I would do something like this:
Locate a view displaying all documents in the database. If you want the agent to be really fast, create a new view. The first column should be sorted and could contain the Universal ID of the document. The other columns contain all the values you want to read in your agent; in your example that would be the created date and last modified date.
Your code could then simply loop through the view like this:
lastSuccessful = FunctionToReadValuesSomewhere() ' Returns 0 if empty
Set view = thisdb.GetView("MyLookupView")
Set col = view.AllEntries
Set entry = col.GetFirstEntry
cnt = 0
Do Until entry is Nothing
cnt = cnt + 1
If cnt > lastSuccessful Then
universalID = entry.ColumnValues(0)
createDate = entry.ColumnValues(1)
lastmodifiedDate = entry.ColumnValues(2)
Call YourFunctionToDoStuff(universalID, createDate, lastmodifiedDate)
Call FunctionToStoreValuesSomeWhere(cnt, universalID)
End If
Set entry = col.GetNextEntry(entry)
Loop
Call FunctionToClearValuesSomeWhere()
Simply store the last successful count and Universal ID in, say, a text file, an environment variable, or even a profile document in the database.
When you restart the agent, have some code that checks whether the stored values are blank (then return 0); otherwise return the last successful value.
Agents already keep a field describing which documents they have not yet processed, and this is automatically updated via normal processing.
A better way of doing what you're attempting might be to store the results of a search in a profile document. However, if you're working with documents in a database you do not have write permission to, the only thing you can do is keep a list of the doclinks you've already processed (along with any information you need to keep about those documents), or a sister database holding one document per doclink plus whatever fields relate to the processing you've done on them. Then transfer the list of IDs and perform the matching on the client for per-document lookups.
Lotus Notes/Domino databases are designed to be distributed across clients and servers in a replicated environment. In the general case, you do not have a guarantee that starting at a given creation or mod time will bring you consistent results.
If you are 100% certain that no replicas of your target database are ever made, then you can use getModifiedDocuments and then write a sort routine to place (modDateTime,UNID) pairs into a SortedSet or other suitable data structure. Then you can process through the Set, and if you run into an error you can save the modDateTime of the element that you were attempting to process as your restart point. There may be a few additional details for you to work out to avoid duplicates, however, if there are multiple documents with the exact same modDateTime stamp.
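As a sketch of that idea (the zero-padded key format is my own choice so that lexical order matches chronological order, and appending the UNID keeps same-timestamp documents distinct; exception handling is omitted):

import java.util.SortedSet;
import java.util.TreeSet;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.DocumentCollection;

// Collect (modDateTime, UNID) pairs into a sorted structure.
DocumentCollection col = database.getModifiedDocuments(startDate, Database.DBMOD_DOC_DATA);
SortedSet<String> keys = new TreeSet<String>();
Document doc = col.getFirstDocument();
while (doc != null) {
    long millis = doc.getLastModified().toJavaDate().getTime();
    keys.add(String.format("%020d", millis) + "|" + doc.getUniversalID());
    Document next = col.getNextDocument(doc);
    doc.recycle();
    doc = next;
}
// Process in chronological order.
for (String key : keys) {
    String unid = key.substring(key.indexOf('|') + 1);
    Document d = database.getDocumentByUNID(unid);
    // ... process d; on failure, persist `key` as the restart point and stop ...
    d.recycle();
}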
I want to make one final remark. I understand that you are asking about Java, but if you are working on a backup or archiving system for compliance purposes, the Lotus C API has special functions that you really should look at.
I've got a little question about databases and Android. I have this code:
sampleDB = this.openOrCreateDatabase(SAMPLE_DB_NAME, MODE_PRIVATE, null);
sampleDB.execSQL("CREATE TABLE IF NOT EXISTS " +
SAMPLE_TABLE_NAME +
" (LastName VARCHAR, FirstName VARCHAR," +
" Country VARCHAR, Age INT(3));");
sampleDB.execSQL("INSERT INTO " +
SAMPLE_TABLE_NAME +
" Values ('Makam','Sai Geetha','India',25);");
and to read:
if (c != null ) {
if (c.moveToFirst()) {
do {
String firstName = c.getString(c.getColumnIndex("FirstName"));
int age = c.getInt(c.getColumnIndex("Age"));
results.add("" + firstName + ",Age: " + age);
} while (c.moveToNext());
}
}
With this code I create and read the database, insert some info into it, and print it on the screen. This all works :)
Now the part I can't figure out:
I use phpMyAdmin (with XAMPP),
and I made the exact same database there as I do in the code.
But how do I connect so that my code reads that database?
It is a local database for now (127.0.0.1).
Is it possible to connect to a local database? (If so, could you tell me how?)
Do you need PHP, or can you do everything in (Android) Java code?
I am totally new to databases, so sometimes it's confusing for me.
Please point me in the right direction.
If you need more information for the question or anything else, please let me know.
It is a local database for now (127.0.0.1).
On Android you have to use 10.0.2.2 (which the emulator maps to the host machine) or the system's static IP; 127.0.0.1 refers to the device itself.
Write a PHP script (you can use other languages too, but PHP is easy to implement) to manage the database, and call this script over HTTP from the Android app.
These Tutorials might help you:
Step-by-Step-Method-to-Access-Webservice-from-Andr
Web Services - An XML-RPC Client for Android
As far as I'm aware there is no MySQL library for Android. But you can use HttpPost to send data to a server-side script (such as PHP), which then returns it in a format you can parse in your Android application.
There's a nice tutorial on how to achieve this here: http://www.helloandroid.com/tutorials/connecting-mysql-database
Here's a link to the HttpPost Documentation: http://developer.android.com/reference/org/apache/http/client/methods/HttpPost.html
Hope this helps; this is a good way to get started communicating with external MySQL databases from an Android application.
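For illustration, a minimal sketch using that (since-deprecated) Apache HttpClient stack; the URL, the get_users.php script, its "country" parameter, and the JSON output format are all assumptions about your own server-side code:

import java.util.ArrayList;
import java.util.List;
import org.apache.http.HttpResponse;
import org.apache.http.NameValuePair;
import org.apache.http.client.HttpClient;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.message.BasicNameValuePair;
import org.apache.http.util.EntityUtils;
import org.json.JSONArray;

String fetchUsers() throws Exception {
    // 10.0.2.2 reaches the host machine from the Android emulator.
    HttpClient client = new DefaultHttpClient();
    HttpPost post = new HttpPost("http://10.0.2.2/get_users.php"); // hypothetical script
    List<NameValuePair> params = new ArrayList<NameValuePair>();
    params.add(new BasicNameValuePair("country", "India")); // hypothetical parameter
    post.setEntity(new UrlEncodedFormEntity(params));
    HttpResponse response = client.execute(post);
    String body = EntityUtils.toString(response.getEntity());
    // Parse whatever JSON your PHP script echoes, e.g. with org.json:
    JSONArray rows = new JSONArray(body);
    return rows.toString();
}

Remember to run network calls off the main thread (for example in an AsyncTask on those older Android versions).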