Where to put index-re-aliasing when re-indexing in the background? - java

I'm trying to re-index an ES index with Java:
// reindex all documents from the old into the new index
QueryBuilder qb = QueryBuilders.matchAllQuery();
SearchResponse scrollResp = client.prepareSearch("my_index")
        .setSearchType(SearchType.SCAN)
        .setScroll(new TimeValue(600000))
        .setQuery(qb)
        .setSize(100)
        .execute().actionGet();
while (true) {
    scrollResp = client.prepareSearchScroll(scrollResp.getScrollId())
            .setScroll(new TimeValue(600000))
            .execute().actionGet();
    final int documentFoundCount = scrollResp.getHits().getHits().length;
    // Break condition: no hits are returned
    if (documentFoundCount == 0) {
        break;
    }
    // otherwise add all documents found in this scroll search to a bulk operation for re-indexing
    logger.info("Found {} documents in the scroll search, re-indexing them via bulk now.", documentFoundCount);
    BulkRequestBuilder bulk = client.prepareBulk();
    for (SearchHit hit : scrollResp.getHits()) {
        bulk.add(new IndexRequest(newIndexName, hit.getType()).source(hit.getSource()));
    }
    bulk.execute(new ActionListener<BulkResponse>() {
        @Override
        public void onResponse(BulkResponse bulkItemResponses) {
            logger.info("Reindexed {} documents from '{}' to '{}'.", bulkItemResponses.getItems().length, currentIndexName, newIndexName);
        }

        @Override
        public void onFailure(Throwable e) {
            logger.error("Could not complete the bulk re-indexing.", e);
        }
    });
}
// the following lines should only be executed if the re-indexing was successful for _all_ documents
logger.info("Finished re-indexing all documents, now setting the aliases from the old to the new index.");
try {
    client.admin().indices().aliases(new IndicesAliasesRequest()
            .removeAlias(currentIndexName, "my_index")
            .addAlias("my_index", newIndexName)).get();
    // finally, delete the old index
    client.admin().indices().delete(new DeleteIndexRequest(currentIndexName)).actionGet();
} catch (InterruptedException | ExecutionException e) {
    logger.error("Could not complete the index re-aliasing.", e);
}
In general, this works, but the approach has one problem:
If there is a failure during re-indexing, e.g. if it takes too long and is aborted by some transaction watchdog (the code runs during EJB startup), the alias is still switched and the old index is removed anyway.
How can I do that alias-re-setting if and only if all bulk requests were successful?

You're not waiting until the bulk request finishes. If you call execute() without actionGet(), it runs asynchronously, which means you will start changing aliases and deleting indices before the new index is completely built.
Also:
client.admin().indices().aliases(new IndicesAliasesRequest().removeAlias(currentIndexName, "my_index").addAlias("my_index", newIndexName)).get();
This should end with execute().actionGet() rather than get(); that is probably why your alias is not getting set.
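A minimal sketch of the synchronous variant (reusing the client, scroll loop, and names from the question; hasFailures() and buildFailureMessage() are part of BulkResponse):
boolean allSucceeded = true;
while (true) {
    // ... scroll and fill the BulkRequestBuilder exactly as before,
    // breaking out when the scroll returns no more hits ...
    BulkResponse bulkResponse = bulk.execute().actionGet(); // blocks until the bulk finishes
    if (bulkResponse.hasFailures()) {
        logger.error("Bulk re-indexing failed: {}", bulkResponse.buildFailureMessage());
        allSucceeded = false;
        break;
    }
}
if (allSucceeded) {
    // switch the alias and delete the old index only after every bulk succeeded
    client.admin().indices().aliases(new IndicesAliasesRequest()
            .removeAlias(currentIndexName, "my_index")
            .addAlias("my_index", newIndexName)).actionGet();
    client.admin().indices().delete(new DeleteIndexRequest(currentIndexName)).actionGet();
}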

Related

Fetching new documents on insertion in ElasticSearch with Java

I have been looking for a solution to create a sort of alert when new documents are added to ES via Logstash. I have seen some threads on here, such as stackoverflow.com/a/51980618/4604579, but they do not really serve my purposes, as the plug-ins mentioned do not work with the newest version of ELK and there is no Changes API out yet.
So I have resorted to trying 2 different approaches:
1. Create a Scroll and run over all the documents in a given index using the Search API, retain the last document's ID, and use it after a given timeout period to get all documents that were added after it.
2. Create a Watcher that checks after a given interval (for example 5 minutes) whether new documents have been added to an index.
I have advanced on approach 1: I can scroll through the roughly 50k documents currently in ES and retrieve the last document's ID (I sort the query by timestamp in ascending order, so I know the last document is the latest that was inserted). But I don't know how efficient this approach is, and I know that a scroll may time out after a given delay, so if no new documents are inserted, the scroll will be removed.
I was also looking into using a Watcher, but I don't really understand how I can set up the condition to check whether a new document was inserted into a given index.
I imagine I can do something along these lines:
PUT _watcher/watch/new_docs
{
  "trigger" : {
    "schedule" : {
      "interval" : "5s"
    }
  },
  "input" : {
    "search" : {
      "request" : {
        "indices" : "logstash",
        "body" : {
          "size" : 0,
          "query" : { "match" : { "@timestamp" : "now-5s" } }
        }
      }
    }
  },
  "condition" : {
    "compare" : { ?? }
  },
  "actions" : {
    "my_webhook" : {
      "webhook" : {
        "method" : "POST",
        "host" : "mylisteninghost",
        "port" : 9200,
        "path" : "/{{watch_id}}",
        "body" : "New document {{document ID}} errors"
      }
    }
  }
}
I am not exactly sure how to define or use the Watcher, or whether it would even work.
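From what I gather from the Watcher docs, the compare condition usually tests the hit count of the search input, with a range query instead of the match above; something like this might be the shape (the range query, window, and threshold are my guesses, not tested):
"input" : {
  "search" : {
    "request" : {
      "indices" : "logstash",
      "body" : {
        "size" : 0,
        "query" : { "range" : { "@timestamp" : { "gte" : "now-5s" } } }
      }
    }
  }
},
"condition" : {
  "compare" : { "ctx.payload.hits.total" : { "gt" : 0 } }
}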
Can anyone let me know what the best course of action would be?
Thank you
EDIT:
For those interested, I found a way to poll the ES REST API using Search After. The difference is that Scroll takes a snapshot of the documents in the ES DB, so any documents added afterwards won't be in that snapshot. In contrast, Search After is stateless: it uses unique sorting parameters (in my case timestamp/id) and holds on to the last ones fetched, and afterwards we query for all documents that come after the held parameters. This way, if any new documents are added, they will come after the held timestamp and will be fetched by the query.
Code:
public static void searchAfterElasticData()
        throws FileNotFoundException, IOException, InterruptedException {
    // create a search request for a given index
    SearchRequest search_request = new SearchRequest(elastic_index);
    SearchSourceBuilder source_builder =
            getSearchSourceBuilder("@timestamp", "_id", 100);
    search_request.source(source_builder);
    SearchResponse search_response = null;
    try {
        search_response = client.search(search_request, RequestOptions.DEFAULT);
    } catch (ElasticsearchException | ConnectException ex) {
        log.info("Error while querying Elastic API: {}", ex.toString());
    }
    if (search_response != null) {
        SearchHit[] search_hits = search_response.getHits().getHits();
        Object[] sort_values = null;
        while (search_hits != null) {
            if (search_hits.length > 0) {
                // if there are records retrieved, parse them
                for (SearchHit hit : search_hits) {
                    Map<String, Object> source_map = hit.getSourceAsMap();
                    try {
                        parse((String) source_map.get("message"));
                    } catch (Exception ex) {
                        log.error("Error while parsing: {}",
                                (String) source_map.get("message"));
                    }
                }
                // get the sort values of the last record for the next search-after request
                log.info("Getting sorting values");
                sort_values = search_response.getHits()
                        .getAt(search_hits.length - 1).getSortValues();
            } else {
                log.info("Waiting 1 minute for new entries");
                Thread.sleep(60000);
            }
            source_builder.searchAfter(sort_values);
            search_request.source(source_builder);
            search_response =
                    client.search(search_request, RequestOptions.DEFAULT);
            search_hits = search_response.getHits().getHits();
            log.info("Fetched hits: {}", search_hits.length);
            log.info("Searching after for new hits");
        }
    }
}
I would still like to know if it is possible to do the same using a Watcher; also, if anyone has any suggestions to make the code more elegant, please share.
Thank you

Getting "collection already exists" in Solr logging when creating collection through java code

I have 8 Solr servers (version 8.2.0) and 5 ZooKeepers (version 3.4.14) in my project.
I am creating Solr collections programmatically using Spring Boot and running some backend logic. The Spring Boot code runs on a schedule via Airflow, 5 times a day. The collection name is unique per day, something like 'name_20200805', and is shared by all 5 schedule runs of that day. For the first schedule run of the day, when the collection is being created, I get the error below, even though the collection is being created for the first time that day. If the collection already exists, I skip creating it for the other 4 runs of the day.
The Airflow schedule is airflow_dag 0 2,4,6,8,10 * * * (PDT timings). The Spring Boot jar runs in Databricks.
method used to create the collection:
boolean doProcess(Record record) {
    Record outputRecord = record.copy();
    // note: yyyy (calendar year), not YYYY (week-based year), to avoid wrong dates at year boundaries
    DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyyMMdd");
    String newCollectionName = "name" + "_" + LocalDate.now().format(formatter);
    CloudSolrClient solr = new CloudSolrClient.Builder(zkHost, Optional.empty()).build();
    List<String> currentCollections = currentCollections(solr);
    if (!currentCollections.contains(newCollectionName)) {
        if (createTimestampedCollection(newCollectionName, solr)) {
            // some code here
        }
    }
    outputRecord.put("solrCollection", newCollectionName);
    try {
        solr.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return super.doProcess(outputRecord);
}

boolean createTimestampedCollection(String newCollectionName, CloudSolrClient solr) {
    final CollectionAdminRequest.Create createCollection =
            CollectionAdminRequest.Create.createCollection(newCollectionName, configSet, numShards, numReplicas);
    createCollection.setBasicAuthCredentials(username, password);
    createCollection.setMaxShardsPerNode(numReplicas);
    CollectionAdminResponse adminResponse = null;
    try {
        adminResponse = createCollection.process(solr);
    } catch (SolrServerException | IOException e) {
        e.printStackTrace();
        return false;
    }
    return true;
}
Can someone help me fix the issue below?
Collection: name_20200805 operation: create failed:org.apache.solr.common.SolrException: collection already exists: name_20200805
at org.apache.solr.cloud.api.collections.CreateCollectionCmd.call(CreateCollectionCmd.java:116)
at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:264)
at org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:505)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
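If two runs ever race past the currentCollections check, one defensive pattern is to treat the already-exists error as success rather than a failure. A hedged sketch against the method above (my assumption, not from the post; it presumes the server-side failure surfaces as a SolrException, and the message check is inherently brittle):
try {
    createCollection.process(solr);
} catch (SolrException e) {
    // another scheduled run may have created the collection between our
    // existence check and this call; treat "already exists" as success
    if (e.getMessage() != null && e.getMessage().contains("collection already exists")) {
        return true;
    }
    throw e;
}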

Bulk write in mongoDB using vertx

I am using bulkWriteWithOptions for inserting multiple documents into the DB. I want the inserted documents back as a result, so that I can know which ones were inserted, which failed, and which were duplicates.
Following is the piece of code I am using:
mongoClient.bulkWriteWithOptions(collection, operations, options, repoAsyncResult -> {
    if (repoAsyncResult.failed()) {
        LOGGER.error("Bulk insertion failed : {}", repoAsyncResult.cause().getMessage());
        if (repoAsyncResult.cause() instanceof MongoBulkWriteException) {
            MongoBulkWriteException exception = (MongoBulkWriteException) repoAsyncResult.cause();
            exception.getWriteErrors().forEach(error ->
                    LOGGER.error("Insert Error : " + error.getMessage()));
        }
        repoFuture.fail(repoAsyncResult.cause());
    } else {
        LOGGER.info("Bulk insertion successful : {}", repoAsyncResult.result().toJson());
        repoFuture.complete(repoAsyncResult.result().toJson());
    }
});
Is there any way to get the inserted documents as a result?
No, you can only get the IDs of upserted documents from repoAsyncResult.result().getUpserts(), a List<JsonObject> from which getString("_id") returns each upserted ID.
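A short sketch of reading those IDs in the success branch (assuming the same repoAsyncResult as above; MongoClientBulkWriteResult is the Vert.x result type, and each upsert entry carries the operation index and generated _id):
MongoClientBulkWriteResult result = repoAsyncResult.result();
result.getUpserts().forEach(upsert ->
        LOGGER.info("Upserted _id {} at operation index {}",
                upsert.getString("_id"), upsert.getInteger("index")));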

FirestoreException: Backend ended Listen Stream

I'm trying to use Firestore in order to set up realtime listeners for a collection. Whenever a document is added, modified, or deleted in a collection, I want the listener to be called. My code is currently working for one collection, but when I try the same code on a larger collection, it fails with the error:
Listen failed: com.google.cloud.firestore.FirestoreException: Backend ended Listen stream: The datastore operation timed out, or the data was temporarily unavailable.
Here's my actual listener code:
/**
 * Sets up a listener at the given collection reference. When changes are made in this collection, it writes a flat
 * text file for import into the backend.
 * @param collectionReference The Collection Reference that we want to listen to for changes.
 */
public static void listenToCollection(CollectionReference collectionReference) {
    AtomicBoolean initialUpdate = new AtomicBoolean(true);
    System.out.println("Initializing listener for: " + collectionReference.getId());
    collectionReference.addSnapshotListener(new EventListener<QuerySnapshot>() {
        @Override
        public void onEvent(@Nullable QuerySnapshot queryDocumentSnapshots, @Nullable FirestoreException e) {
            // Error handling
            if (e != null) {
                System.err.println("Listen failed: " + e);
                return;
            }
            // The first time this is called, it simply reads everything already in the collection.
            // We don't care about the initial value, only the updates, so we ignore the first call.
            if (initialUpdate.get()) {
                initialUpdate.set(false);
                System.out.println("Initial update complete...\nListener active for " + collectionReference.getId() + "...");
                return;
            }
            // A document has changed; propagate this back to the backend by writing a text file.
            for (DocumentChange dc : queryDocumentSnapshots.getDocumentChanges()) {
                String docId = dc.getDocument().getId();
                Map<String, Object> docData = dc.getDocument().getData();
                String folderPath = createFolderPath(collectionReference, docId, docData);
                switch (dc.getType()) {
                    case ADDED:
                        System.out.println("Document Created: " + docId);
                        writeMapToFile(docData, folderPath, "CREATE");
                        break;
                    case MODIFIED:
                        System.out.println("Document Updated: " + docId);
                        writeMapToFile(docData, folderPath, "UPDATE");
                        break;
                    case REMOVED:
                        System.out.println("Document Deleted: " + docId);
                        writeMapToFile(docData, folderPath, "DELETE");
                        break;
                    default:
                        break;
                }
            }
        }
    });
}
It seems to me that the collection is too large and the initial download of the collection is timing out. Is there some sort of workaround I can use in order to get updates to this collection in real time?
I reached out to the Firebase team, and they're currently getting back to me on the issue. In the meantime, I was able to reduce the size of my listener by querying the collection based on a Last Updated timestamp attribute. I only looked at documents that were recently updated, and had my app change this attribute whenever a change was made.
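A minimal sketch of that workaround (the lastUpdated field name is my placeholder; whereGreaterThan and addSnapshotListener are standard Firestore query APIs):
// listen only to documents touched in the last hour; the app must keep
// the (hypothetical) lastUpdated field current on every write
Timestamp oneHourAgo = Timestamp.ofTimeSecondsAndNanos(
        Instant.now().minus(Duration.ofHours(1)).getEpochSecond(), 0);
collectionReference
        .whereGreaterThan("lastUpdated", oneHourAgo)
        .addSnapshotListener((snapshots, error) -> {
            if (error != null) {
                System.err.println("Listen failed: " + error);
                return;
            }
            for (DocumentChange dc : snapshots.getDocumentChanges()) {
                System.out.println(dc.getType() + ": " + dc.getDocument().getId());
            }
        });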

Azure Document DB - Java 1.9.5 | Authorization Error

I have a collection with some documents in it. In my application I create this collection first and then insert documents. Based on the requirements, I also need to truncate (delete all documents in) the collection. Using the DocumentDB Java API, I have written the following code for this purpose:
DocumentClient documentClient = getConnection(masterkey, server, portNo);
List<Database> databaseList = documentClient
        .queryDatabases("SELECT * FROM root r WHERE r.id='" + schemaName + "'", null)
        .getQueryIterable().toList();
DocumentCollection collection = null;
Database databaseCache = (Database) databaseList.get(0);
List<DocumentCollection> collectionList = documentClient
        .queryCollections(databaseCache.getSelfLink(),
                "SELECT * FROM root r WHERE r.id='" + collectionName + "'", null)
        .getQueryIterable().toList();
// truncate logic
if (collectionList.size() > 0) {
    collection = (DocumentCollection) collectionList.get(0);
    if (truncate) {
        try {
            documentClient.deleteDocument(collection.getSelfLink(), null);
        } catch (DocumentClientException e) {
            e.printStackTrace();
        }
    }
} else { // create logic
    RequestOptions requestOptions = new RequestOptions();
    requestOptions.setOfferType("S1");
    collection = new DocumentCollection();
    collection.setId(collectionName);
    try {
        collection = documentClient.createCollection(databaseCache.getSelfLink(), collection, requestOptions).getResource();
    } catch (DocumentClientException e) {
        e.printStackTrace();
    }
}
With the above code I am able to create a new collection successfully, and I am able to insert documents into it as well. But while truncating the collection, I get the error below:
com.microsoft.azure.documentdb.DocumentClientException: The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'delete
colls
eyckqjnw0ae=
I am using Azure Document DB Java API version 1.9.5.
It would be of great help if you could point out the error in my code or suggest a better way of truncating a collection. I would really appreciate any help here.
According to your description & code, I think the issue was caused by the code below.
try {
    documentClient.deleteDocument(collection.getSelfLink(), null);
} catch (DocumentClientException e) {
    e.printStackTrace();
}
It seems that you want to delete a document via the code above, but you are passing a collection link as the documentLink argument.
So if your real intention is to delete the collection, please use the method DocumentClient.deleteCollection(collectionLink, options).
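A short sketch of that fix, reusing the collection object from the question (note that this removes the collection itself along with all its documents, so to "truncate" this way you would recreate the collection afterwards):
try {
    // deletes the whole collection, including every document in it
    documentClient.deleteCollection(collection.getSelfLink(), null);
} catch (DocumentClientException e) {
    e.printStackTrace();
}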
