Error while indexing data to Elasticsearch using Java

Scenario: I am trying to index JSON data into Elasticsearch and am getting an error like
17:13:38.146 [main] ERROR com.opnlabs.lighthouse.elastic.ElasticSearchIndexer -
{"root_cause":[{"type":"illegal_argument_exception","reason":"Can't merge a non object mapping [map.audits.map.font-size.map.details.map.items.myArrayList.map.selector] with an object mapping [map.audits.map.font-size.map.details.map.items.myArrayList.map.selector]"}],"type":"illegal_argument_exception","reason":"Can't merge a non object mapping [map.audits.map.font-size.map.details.map.items.myArrayList.map.selector] with an object mapping [map.audits.map.font-size.map.details.map.items.myArrayList.map.selector]"}
What is causing this issue? Please help.
Code
JSONObject newJsonObject = new JSONObject();
JSONObject log = jsonObject.getJSONObject("audits");
JSONObject log1 = jsonObject.getJSONObject("categories");
newJsonObject.put("audits", log);
newJsonObject.put("categories", log1);
newJsonObject.put("timeStamp", time);
Index index = new Index.Builder(newJsonObject).index(mongoIndexName+"1").type("data").build();
DocumentResult a = client.execute(index);
Basically I'm trying to add three JSON values into an Elasticsearch index. Please help me with what I'm doing wrong.

The error message means that you are trying to change an existing mapping. However, that is not possible in Elasticsearch. Once a mapping has been created, it cannot be changed.
As explained by Shay Banon himself:
You can't change existing mapping type, you need to create a new index
with the correct mapping and index the data again.
So you must create a new index with the correct mapping. Depending on the situation, you either
- create an additional index, or
- delete the current index and re-create it from scratch.
Of course in the latter case you will lose all data in the index, so prepare accordingly.
Taken from here: Can't merge a non object mapping with an object mapping error in machine learning(beta) module
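If you go the delete-and-recreate route with the Jest client (which the question's code appears to use), a minimal sketch could look like the following; client and mongoIndexName are the question's own objects, and the mapping body is a hypothetical placeholder you would fill in so that "selector" is mapped consistently:
import io.searchbox.indices.CreateIndex;
import io.searchbox.indices.DeleteIndex;
import io.searchbox.indices.mapping.PutMapping;

// Careful: deleting the index removes all of its data.
client.execute(new DeleteIndex.Builder(mongoIndexName + "1").build());
client.execute(new CreateIndex.Builder(mongoIndexName + "1").build());

// Optionally lock the mapping down up front so dynamic mapping can't
// map the same field as object in one document and non-object in another
// (placeholder mapping body):
String mapping = "{\"data\":{\"properties\":{}}}";
client.execute(new PutMapping.Builder(mongoIndexName + "1", "data", mapping).build());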

Related

Elasticsearch update document without creating new index

I have many existing indices partitioned by date, e.g. index_190901, index_190902, ...
And I have an API which takes index_name and doc_id as inputs. Users want to update some fields of a document given index_name and doc_id.
I'm trying to update a document using the following code:
updateRequest.index("invalid_daily_index")
    .type("type")
    .id("id")
    .doc(jsonMap);
It works fine if the user inputs an existing index, but if the user inputs a non-existing index, a new index with no documents is created.
I know that I can set auto_create_index, but I still want indices to be created automatically when I insert new documents.
Checking whether the index exists with client.indices.exists(request, RequestOptions.DEFAULT) is quite expensive; I don't want to check on every request.
How can I make Elasticsearch not create a new index when I use updateRequest?
You can block automatic creation of non-existing indices by setting the cluster's action.auto_create_index setting to false:
PUT _cluster/settings
{
  "persistent": { "action.auto_create_index": "false" }
}
For details, take a look at the reference.
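Since the question already uses the Java high-level REST client (RequestOptions.DEFAULT), the same setting can also be applied from Java; a minimal sketch, assuming an existing RestHighLevelClient named client:
import org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.common.settings.Settings;

// Persistently disable automatic index creation for the cluster.
ClusterUpdateSettingsRequest request = new ClusterUpdateSettingsRequest();
request.persistentSettings(Settings.builder()
        .put("action.auto_create_index", "false")
        .build());
client.cluster().putSettings(request, RequestOptions.DEFAULT);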

Problems with queryBuilder greenDAO

I am using the latest version of greenDAO... I am missing something about using the data from the DB.
I need to prevent the creation of records that have the same PROFILE_NUMBER. Currently, during testing, I have inserted one record with a PROFILE_NUMBER of 1.
I need someone to show me an example of how to obtain the actual value of the field from the DB.
I am using this
SvecPoleDao svecPoleDao = daoSession.getSvecPoleDao();
List poles = svecPoleDao.queryBuilder().where(SvecPoleDao.Properties.Profile_number.eq(1)).list();
and it obtains something... this:
[com.example.bobby.poleattachmenttest2_workingdatabase.db.SvecPole#bfe830c3.2]
Is this serialized? The actual value I am looking for here is 1.
Here is the solution. You'll need to use listLazy() instead of list(), with a typed list:
List<SvecPole> poles = svecPoleDao.queryBuilder().where(SvecPoleDao.Properties.Profile_number.eq(1)).listLazy();
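What got printed in the question is just the entity's default toString(), not the column value. To read the actual field, call the generated getter on an entity from the list; a sketch, assuming greenDAO generated a getProfile_number() accessor for the PROFILE_NUMBER column (the exact name and type depend on your entity definition):
if (!poles.isEmpty()) {
    SvecPole pole = poles.get(0);
    // The actual stored column value -- 1 in this test.
    long profileNumber = pole.getProfile_number();
    System.out.println(profileNumber);
}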

Java Couchbase Querying to find a document's ID?

I'm new to Couchbase. I'm using Java for this. I'm trying to remove a document from a bucket by looking up its ID with query parameters (assuming the ID is unknown).
Let's say I have a bucket called test-data. In that bucket I have a document with an ID of 555 and content of {"name":"bob","num":"10"}.
I want to be able to remove that document by querying on 'name' and 'num'.
So far I have this (hardcoded):
String statement = "SELECT META(`test-data`).id from `test-data` WHERE name = \"bob\" and num = \"10\"";
N1qlQuery query = N1qlQuery.simple(statement);
N1qlQueryResult result = bucket.query(query);
List<N1qlQueryRow> row = result.allRows();
N1qlQueryRow res1 = row.get(0);
System.out.println(res1);
//output: {"id":"555"}
So I'm getting a JSON object that has the document's ID in it. What would be the best way to extract that ID so that I can then remove the queried document from the bucket using its ID? Am I doing too many steps? Is there a better way to extract the document's ID?
bucket.remove(docID)
Ideally I'd like to use something like an N1qlQueryResult to get this going, but I'm not sure how to set that up.
N1qlQueryResult result = bucket.query(select("META.id").fromCurrentBucket().where((x("num").eq("\""+num+"\"")).and(x("name").eq("\""+name+"\""))));
But that isn't working at the moment.
Any help or direction would be appreciated. Thanks.
There might be a better way, which is running this kind of query:
delete from `test-data` use keys '00000874a09e749ab6f199c0622c5cb0' returning raw META(`test-data`).id
or, if your fields have an index:
delete from `test-data` where name='bob' and num='10' returning raw META(`test-data`).id
This query deletes the specified document with the given document key (which is meta.id) and returns the document ID of the deleted document if it deletes any; it returns empty if no documents were deleted.
You can implement this query with the Couchbase SDK as follows:
Statement statement = deleteFrom("test-data")
.where(x("name").eq(s("bob")).and(x("num").eq(s("10"))))
.returningRaw(meta(i("test-data")).get("id"));
You can make this statement parameterized or just execute it like that.
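A sketch of running that statement, and of extracting the ID from the question's original SELECT result; all names come from the question's own code:
// Execute the DSL delete statement built above:
N1qlQueryResult deleted = bucket.query(N1qlQuery.simple(statement));

// Or, staying with the SELECT-then-remove approach: each row's value
// is a JsonObject like {"id":"555"}, so the id is one call away.
String docId = row.get(0).value().getString("id");
bucket.remove(docId);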

Unable to retrieve Projection/Multi-Relation field Requests.Custom_SFDCChangeReqID2 using VersionOne Java SDK

I have been trying to retrieve information by querying a specific asset (Story/Defect) on V1 using the VersionOne.SDK.Java.APIClient. I have been able to retrieve information like ID.Number and Status.Name, but not Requests.Custom_SFDCChangeReqID2 under a Story or a Defect.
I checked the metadata for:
https://.../Story?xsl=api.xsl
https://.../meta.V1/Defect?xsl=api.xsl
https://.../meta.V1/Request?xsl=api.xsl
And the naming and information look right.
Here is my code:
IAssetType type = metaModel.getAssetType("Story");
IAttributeDefinition requestCRIDAttribute = type.getAttributeDefinition("Requests.Custom_SFDCChangeReqID2");
IAttributeDefinition idNumberAttribute = type.getAttributeDefinition("ID.Number");
Query query = new Query(type);
query.getSelection().add(requestCRIDAttribute);
query.getSelection().add(idNumberAttribute);
Asset[] results = v1Api.retrieve(query).getAssets();
String RequestCRID= result.getAttribute(requestCRIDAttribute).getValue().toString();
String IdNumber= result.getAttribute(idNumberAttribute).getValue().toString();
At this point, I can get some values for ID.Number, but I am not able to retrieve any information for the value Custom_SFDCChangeReqID2.
When I run the RESTful query from a browser, it works and does retrieve the information I am looking for. I used this syntax:
https://.../rest-1.v1/Data/Story?sel=Number,ID,Story.Requests.Custom_SFDCChangeReqID2,Story.
Alex: Remember that results is an array of Assets, so I guess you should be accessing the information using something like
String RequestCRID= results[0].getAttribute(requestCRIDAttribute).getValue().toString();
String IdNumber= results[0].getAttribute(idNumberAttribute).getValue().toString();
or iterate through the array, as sketched below.
Also notice that you have defined Asset[] results, not result.
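For completeness, a sketch of iterating over the whole array with the question's own attribute definitions (exception handling omitted, as in the question; getValue() can come back null when a story has no linked request, so a null check is prudent):
for (Asset result : results) {
    Object crid = result.getAttribute(requestCRIDAttribute).getValue();
    Object idNum = result.getAttribute(idNumberAttribute).getValue();
    System.out.println(idNum + " -> " + (crid == null ? "<no request>" : crid.toString()));
}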
Hi, thanks for your answer! I completely forgot about showing the loop; I was too focused on the retrieving-information part. Yes, I was actually using a loop, and yes, I created a temporary variable to check what I was getting from the query.
Because I was getting the variables one by one, I was only using the first record. My code works after all; it was just that what I was querying didn't contain any information of use to me, which is why I was not finding any. Anyway, thanks for your comments and observations.

Save and find JSON string in MongoDB

I'm building a logging application that does the following:
- gets JSON strings from many loggers continuously and saves them to a db
- serves the collected data as a per-logger bulk
My intention is to use a document-based NoSQL store to have the bulk structure right away. After some research I decided to go with MongoDB because of the following features:
- comprehensive functions to insert data into existing structures ($push, (capped) collections)
- automatic sharding with a key I choose (so I can shard on a per-logger basis and therefore serve bulk data in no time, all data already being on the same db server)
The JSON I get from the loggers looks like this:
[
  {
    "bdy": {
      "cat": {"id": "36494h89h", "toc": 55, "boc": 99},
      "dataT": "2013-08-12T13:44:03Z",
      "had": 0,
      "rng": 23,
      "Iss": [{"id": 10, "par": "dim, 10, dak"}]
    },
    "hdr": {"v": "0.2.7", "N": 2, "Id": "KBZD348940"}
  }
]
The logger can send more than one element in the same array; in this example it is just one.
I started coding in Java with the Mongo driver, and the first problem I discovered was: I have to parse my (no doubt valid) JSON to be able to save it in MongoDB. I learned that this is due to BSON being the native format of MongoDB. I would have liked to forward the JSON string to the db directly to save that extra execution time.
So what I do in a first Java test to save just this JSON string is:
String loggerMessage = "...the above JSON string...";
DBCollection coll = db.getCollection("logData");
DBObject message = (DBObject) JSON.parse(loggerMessage);
coll.insert(message);
The last line of this code causes the following exception:
Exception in thread "main" java.lang.IllegalArgumentException: BasicBSONList can only work with numeric keys, not: [_id]
at org.bson.types.BasicBSONList._getInt(BasicBSONList.java:161)
at org.bson.types.BasicBSONList._getInt(BasicBSONList.java:152)
at org.bson.types.BasicBSONList.get(BasicBSONList.java:104)
at com.mongodb.DBCollection.apply(DBCollection.java:767)
at com.mongodb.DBCollection.apply(DBCollection.java:756)
at com.mongodb.DBApiLayer$MyCollection.insert(DBApiLayer.java:220)
at com.mongodb.DBApiLayer$MyCollection.insert(DBApiLayer.java:204)
at com.mongodb.DBCollection.insert(DBCollection.java:76)
at com.mongodb.DBCollection.insert(DBCollection.java:60)
at com.mongodb.DBCollection.insert(DBCollection.java:105)
at mongomockup.MongoMockup.main(MongoMockup.java:65)
I tried to save this JSON via the mongo shell and it works perfectly.
How can I get this done in Java?
How could I maybe avoid the extra parsing?
What structure would you choose to save the data? An array of messages in the same document, a collection of messages in single documents, ...?
It didn't work because of the array: you need a BasicDBList to be able to save multiple messages. Here is my new solution that works perfectly:
BasicDBList data = (BasicDBList) JSON.parse(loggerMessage);
for (int i = 0; i < data.size(); i++) {
    coll.insert((DBObject) data.get(i));
}
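A small variation under the same legacy-driver assumptions: DBCollection.insert also accepts a List<DBObject>, so the parsed array can go to the server in a single bulk insert instead of one insert per element:
import java.util.ArrayList;
import java.util.List;

BasicDBList data = (BasicDBList) JSON.parse(loggerMessage);
List<DBObject> docs = new ArrayList<DBObject>();
for (Object o : data) {
    docs.add((DBObject) o); // each array element is one logger message
}
coll.insert(docs); // one round trip for the whole batch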
