I want to create a search for books with Elasticsearch and Spring Data.
I store my books' ISBN/EAN without hyphens in my database, and this data is what I index with Elasticsearch.
Indexed data: 1113333444444
If I search for an ISBN/EAN with hyphens: 111-3333-444444
there is no result. If I search without hyphens, my book is found as expected.
My settings are like this:
{
"analysis": {
"filter": {
"clean_special": {
"type": "pattern_replace",
"pattern": "[^a-zA-Z0-9]",
"replacement": ""
}
},
"analyzer": {
"isbn_search_analyzer": {
"type": "custom",
"tokenizer": "keyword",
"filter": [
"clean_special"
]
}
}
}
}
I index my fields like this:
@Field(type = FieldType.Keyword, searchAnalyzer = "isbn_search_analyzer")
private String isbn;
@Field(type = FieldType.Keyword, searchAnalyzer = "isbn_search_analyzer")
private String ean;
If I test my analyzer:
GET indexname/_analyze
{
"analyzer" : "isbn_search_analyzer",
"text" : "111-3333-444444"
}
I get the following result:
{
"tokens" : [
{
"token" : "1113333444444",
"start_offset" : 0,
"end_offset" : 15,
"type" : "word",
"position" : 0
}
]
}
If I search like this:
GET indexname/_search
{
"query": {
"query_string": {
"fields": [ "isbn", "ean" ],
"query": "111-3333-444444"
}
}
}
I don't get any results. Does anyone have an idea?
As mentioned by @P.J.Meisch, you have done everything correctly except for the field data type: it needs to be text. When you define the fields as keyword, your custom analyzer isbn_search_analyzer is ignored, even though you explicitly tell Elasticsearch to use it.
Here is a working example on your sample data with the fields defined as text.
Index mapping
{
"settings": {
"analysis": {
"filter": {
"clean_special": {
"type": "pattern_replace",
"pattern": "[^a-zA-Z0-9]",
"replacement": ""
}
},
"analyzer": {
"isbn_search_analyzer": {
"type": "custom",
"tokenizer": "keyword",
"filter": [
"clean_special"
]
}
}
}
},
"mappings": {
"properties": {
"isbn": {
"type": "text",
"analyzer": "isbn_search_analyzer"
},
"ean": {
"type": "text",
"analyzer": "isbn_search_analyzer"
}
}
}
}
Index Sample records
{
"isbn" : "111-3333-444444"
}
{
"isbn" : "111-3333-2222"
}
Search query
{
"query": {
"query_string": {
"fields": [
"isbn",
"ean"
],
"query": "111-3333-444444"
}
}
}
And the search response:
"hits": [
{
"_index": "65780647",
"_type": "_doc",
"_id": "1",
"_score": 0.6931471,
"_source": {
"isbn": "111-3333-444444"
}
}
]
Elasticsearch does not analyze fields of type keyword. You need to set the type to text.
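In Spring Data Elasticsearch terms, that means mapping the fields as Text instead of Keyword. A minimal sketch (assuming the analyzer and searchAnalyzer attributes of the @Field annotation and the isbn_search_analyzer from your settings):
// Sketch: map the fields as text so the custom analyzer is actually applied
// at index and search time (adjust to your Spring Data Elasticsearch version).
@Field(type = FieldType.Text, analyzer = "isbn_search_analyzer", searchAnalyzer = "isbn_search_analyzer")
private String isbn;
@Field(type = FieldType.Text, analyzer = "isbn_search_analyzer", searchAnalyzer = "isbn_search_analyzer")
private String ean;
Because isbn_search_analyzer uses the keyword tokenizer, the whole ISBN/EAN still ends up as a single token, so exact matching is preserved while the hyphens are stripped on both the indexing and the search side.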
Related
I have created the normalizer below for the field code so that I can query Elasticsearch with both upper and lower case.
PUT my_index12
{
"settings": {
"analysis": {
"normalizer": {
"my_normalizer": {
"type": "custom",
"char_filter": [],
"filter": ["lowercase", "asciifolding"]
}
}
}
},
"mappings": {
"doc": {
"properties": {
"code": {
"type": "keyword",
"normalizer": "my_normalizer"
}
}
}
}
}
I am trying to search using a wildcard query, but I am not getting any results. Without the normalizer I am able to find the document with the uppercase value, exactly as it is stored in Elasticsearch.
GET my_index12/_search
{
"query": {
"wildcard": {
"code.keyword": {
"value": "*AB-7000-5000-Wk-21*"
}
}
}
}
Please find my indexed documents below:
{
"_index": "my_index12",
"_type": "doc",
"_id": "2",
"_score": 1,
"_source": {
"code": "ABCq123S"
}
},
{
"_index": "my_index12",
"_type": "doc",
"_id": "1",
"_score": 1,
"_source": {
"code": "AB-7000-5000-Wk-21"
}
}
If I try to do the mapping for code.keyword:
"mappings": {
"doc": {
"properties": {
"code.keyword": {
"type": "keyword",
"normalizer": "my_normalizer"
}
}
I am getting the below error while inserting documents into the index:
{
"error": {
"root_cause": [
{
"type": "mapper_parsing_exception",
"reason": "object mapping for [code] tried to parse field [code] as object, but found a concrete value"
}
],
"type": "mapper_parsing_exception",
"reason": "object mapping for [code] tried to parse field [code] as object, but found a concrete value"
},
"status": 400
}
I am trying to configure Elasticsearch with synonyms.
These are my settings:
"analysis": {
"analyzer": {
"category_synonym": {
"tokenizer": "whitespace",
"filter": [
"synonym_filter"
]
}
},
"filter": {
"synonym_filter": {
"type": "synonym",
"synonyms_path": "synonyms.txt"
}
}
}
Mappings config:
"category": {
"properties": {
"name": {
"type":"string",
"search_analyzer" : "category_synonym",
"index_analyzer" : "standard",
"fields": {
"raw": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
And the list of my synonyms
film => video,
ooh => panels , poster,
commercial => advertisement,
print => magazine
I must say that I am using the Elasticsearch Java API.
I am using QueryBuilders.queryStringQuery because this is the only way I can set analyzers on my request.
So, when I run:
QueryBuilders.queryStringQuery("name:film").analyzer(analyzer)
It returns:
[
{
"id": 71,
"name": "Pitch video",
"description": "... ",
"parent": null
},
{
"id": 25,
"name": "Video",
"description": "... ",
"parent": null
}
]
That is perfect for me, but when I am calling something like this
QueryBuilders.queryStringQuery("name:vid").analyzer(analyzer)
I expect it to return the same objects, but there is nothing: []
So, I added an asterisk to the queryStringQuery:
QueryBuilders.queryStringQuery("name:vid*").analyzer(analyzer)
Works well, but now
QueryBuilders.queryStringQuery("name:film*").analyzer(analyzer)
returns [].
So, how can I configure Elasticsearch so that it returns the same objects when I search for video, vid, film and fil?
Thanks in advance!
Hm, I don't think Elasticsearch will know to "translate" fil into vid :-). So, I think you need edgeNGrams for this, both at indexing and search time.
PUT test
{
"settings": {
"analysis": {
"analyzer": {
"category_synonym": {
"tokenizer": "whitespace",
"filter": [
"synonym_filter",
"my_edgeNGram_filter"
]
},
"standard_edgeNGram": {
"tokenizer": "standard",
"filter": [
"lowercase",
"synonym_filter",
"my_edgeNGram_filter"
]
}
},
"filter": {
"synonym_filter": {
"type": "synonym",
"synonyms_path": "synonyms.txt"
},
"my_edgeNGram_filter": {
"type": "edgeNGram",
"min_gram": 2,
"max_gram": 8
}
}
}
},
"mappings": {
"test": {
"properties": {
"name": {
"type": "string",
"analyzer": "category_synonym",
"index_analyzer": "standard_edgeNGram",
"fields": {
"raw": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
}
}
POST test/test/1
{"name": "Pitch video"}
POST test/test/2
{"name": "Video"}
GET /test/test/_search
{
"query": {
"query_string": {
"query": "name:fil"
}
}
}
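Since you are building queries through the Java API anyway, the same search can be issued with the query-string builder you already use. A rough sketch (assuming the pre-5.x TransportClient from your code, an existing client instance, and the test index/type created above):
// Sketch: run the query against the index defined above; with this mapping the
// field's own analyzers are applied, so no explicit .analyzer(...) call should be needed.
SearchResponse response = client.prepareSearch("test")
    .setTypes("test")
    .setQuery(QueryBuilders.queryStringQuery("name:film"))
    .execute().actionGet();
Running it with "name:vid" should now match the video documents as well, because the synonym and edgeNGram filters are applied on both the index and the search side.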
I have JsonObjects that I search with Elasticsearch from a Java application, using the Java API to build search queries. The objects contain a field called "such" that holds a search string with which the JsonObject should be found; a simple search string would be, for example, "STVBBM160A". Besides the usual characters a-z, A-Z and 0-9, the search string could also look like the following examples:
"STV-157ABR", "F-G/42-W3" or "DDM000.074.6652"
The search should already return results when only the first characters are typed into the search field, which it does for a search like "F-G/42".
My problem: the search sometimes doesn't return any results at all, and only once the last character is typed does it find the right document.
What I tried: First I wanted to use a WildcardQuery where the query would be "typedStuff*", but the WildcardQuery didn't return any results at all as soon as I typed anything but * (it used to work for other search fields with other values).
Now I am using a QueryStringQuery, which also takes the input and appends a * character. By escaping the query string, I am able to search for strings like "F-G/42" and so on, but the search for "DDM000.074.6652" doesn't return any results until Elasticsearch has the whole string to search. Also, when I type "STV", all results with "STV-xxxxx" (containing the "-" after STV) are returned, but not the object with "STVBBM160A"; again, nothing is returned in between as soon as the search string is "STVB", until the whole string is given.
This is the query I'm using right now:
{
"size": 1000,
"min_score": 1,
"query": {
"bool": {
"must": [
{
"query_string": {
"query": "MY_DATA_TYPE",
"fields": [
"doc.db_doc_type"
]
}
},
{
"query_string": {
"query": "MY_SPECIFIC_TYPE",
"fields": [
"doc.db_doc_specific"
]
}
}
],
"should": {
"query_string": {
"query": "STV*",
"fields": [
"doc.such"
],
"boost": 3,
"escape": true
}
}
}
}
}
This is the old query with the WildcardQuery, which doesn't return any results at all unless the query string is nothing but *:
{
"size": 50,
"min_score": 1,
"query": {
"bool": {
"must": [
{
"query_string": {
"query": "MY_DATA_TYPE",
"fields": [
"doc.db_doc_type"
]
}
},
{
"query_string": {
"query": "MY_SPECIFIC_TYPE",
"fields": [
"doc.db_doc_specific"
]
}
}
],
"should": {
"wildcard": {
"doc.such": {
"wildcard": "STV*",
"boost": 3
}
}
}
}
}
}
When using a PrefixQuery, the search also doesn't return any results at all (with and without the *):
{
"size": 50,
"min_score": 1,
"query": {
"bool": {
"must": [
{
"query_string": {
"query": "MY_DATA_TYPE",
"fields": [
"doc.db_doc_type"
]
}
},
{
"query_string": {
"query": "MY_SPECIFIC_TYPE",
"fields": [
"doc.db_doc_specific"
]
}
}
],
"should": {
"prefix": {
"doc.such": {
"prefix": "HSTKV*",
"boost": 3
}
}
}
}
}
}
How can this query be changed to achieve the goal of getting all results that start with the specified string, no matter whether the field doc.such also contains numbers or special characters like "_", "." or "/"?
Thanks in advance
As soon as you want to query prefixes, suffixes or substrings in a serious way, you need to leverage nGrams. In your case, since you're only after prefixes, an edgeNGram tokenizer is in order. You need to change the settings of your index to something like this:
PUT your_index
{
"settings": {
"analysis": {
"analyzer": {
"prefix_analyzer": {
"tokenizer": "prefix_tokenizer",
"filter": [
"lowercase"
]
},
"search_prefix_analyzer": {
"tokenizer": "keyword",
"filter": [
"lowercase"
]
}
},
"tokenizer": {
"prefix_tokenizer": {
"type": "edgeNGram",
"min_gram": "1",
"max_gram": "25"
}
}
}
},
"mappings": {
"your_type": {
"properties": {
"doc": {
"properties": {
"such": {
"type": "string",
"fields": {
"starts_with": {
"type": "string",
"analyzer": "prefix_analyzer",
"search_analyzer": "search_prefix_analyzer"
}
}
}
}
}
}
}
}
}
What will happen with this analyzer is that when indexing F-G/42-W3 the following tokens will be indexed: f, f-, f-g, f-g/, f-g/4, f-g/42, f-g/42-, f-g/42-w, f-g/42-w3.
At search time, we'll simply lowercase the user input and the prefix will be matched against the indexed tokens.
Then your query can simply be transformed to a match query:
{
"size": 1000,
"min_score": 1,
"query": {
"bool": {
"must": [
{
"query_string": {
"query": "MY_DATA_TYPE",
"fields": [
"doc.db_doc_type"
]
}
},
{
"query_string": {
"query": "MY_SPECIFIC_TYPE",
"fields": [
"doc.db_doc_specific"
]
}
}
],
"should": {
"match": {
"doc.such": {
"query": "F-G/4"
}
}
}
}
}
}
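If you build this with the Java API as in your current code, the reworked should clause is just a match query against the starts_with sub-field. A sketch (assuming the QueryBuilders class you already use and the field names from the mapping above):
// Sketch: the same bool query built programmatically.
QueryBuilder query = QueryBuilders.boolQuery()
    .must(QueryBuilders.queryStringQuery("MY_DATA_TYPE").field("doc.db_doc_type"))
    .must(QueryBuilders.queryStringQuery("MY_SPECIFIC_TYPE").field("doc.db_doc_specific"))
    .should(QueryBuilders.matchQuery("doc.such.starts_with", "F-G/4").boost(3));
No wildcards, escaping or trailing * are needed any more, because every prefix of the indexed value is already a token in the starts_with sub-field.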
I have the below mapping structure for my Elasticsearch index.
{
"users": {
"mappings": {
"user-type": {
"properties": {
"lastModifiedBy": {
"type": "string"
},
"lastModifiedDate": {
"type": "date",
"format": "dateOptionalTime"
},
"details": {
"type": "nested",
"properties": {
"lastModifiedBy": {
"type": "string"
},
"lastModifiedDate": {
"type": "date",
"format": "dateOptionalTime"
},
"views": {
"type": "nested",
"properties": {
"id": {
"type": "string"
},
"name": {
"type": "string"
},
"properties": {
"properties": {
"name": {
"type": "string"
},
"type": {
"type": "string"
},
"value": {
"type": "string"
}
}
}
}
}
}
}
}
}
}
}
}
Basically, I want to retrieve ONLY the view object inside details, based on the index id and the view id (details.views.id).
I have tried the below Java code, but it doesn't seem to work.
SearchRequestBuilder srq = this.client.prepareSearch(this.indexName)
.setTypes(this.type)
.setQuery(QueryBuilders.termQuery("_id", sid))
.setPostFilter(FilterBuilders.nestedFilter("details.views",
FilterBuilders.termFilter("details.views.id", id)));
Below is the query structure for this Java code.
{
"query": {
"term": {
"_id": "123"
}
},
"post_filter": {
"nested": {
"filter": {
"term": {
"details.views.id": "def"
}
},
"path": "details.views"
}
}
}
Since details is nested and views is nested inside details, you basically need two nested filters as well (one for each level); in addition, the constraint on the _id field is best expressed with the ids query. The query DSL would look like this:
{
"query": {
"ids": {
"values": [
"123"
]
}
},
"post_filter": {
"nested": {
"filter": {
"nested": {
"path": "details.view",
"filter": {
"term": {
"details.views.id": "def"
}
}
}
},
"path": "details"
}
}
}
Translating this into Java code yields:
// 2nd-level nested filter
FilterBuilder detailsView = FilterBuilders.nestedFilter("details.views",
FilterBuilders.termFilter("details.views.id", id));
// 1st-level nested filter
FilterBuilder details = FilterBuilders.nestedFilter("details", detailsView);
// ids constraint
IdsQueryBuilder ids = QueryBuilders.idsQuery(this.type).addIds("123");
SearchRequestBuilder srq = this.client.prepareSearch(this.indexName)
.setTypes(this.type)
.setQuery(ids)
.setPostFilter(details);
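To actually execute the prepared request and read the matching documents, for example:
// Run the search and print the source of each hit
SearchResponse response = srq.execute().actionGet();
for (SearchHit hit : response.getHits()) {
    System.out.println(hit.getSourceAsString());
}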
PS: I second what @Paul said, i.e. always play around with the query DSL first, and once you have zeroed in on the exact query you need, you can translate it to the Java form.
I have configured my index with the following settings, and the matchAll query returns results that have the value "trial" in the field IPRANGE.
The settings:
{
"settings" : {
"analysis": {
"filter": {
"autocomplete_filter": {
"type": "edge_ngram",
"min_gram": 1,
"max_gram": 5
}
},
"analyzer": {
"autocomplete": {
"type": "custom",
"tokenizer": "standard",
"filter": [
"lowercase",
"autocomplete_filter"
]
}
}
}
},
"mappings" : {
"users" : {
"properties" : {
"IPRANGE" : {
"type" : "string",
"analyzer" : "autocomplete"
}
}
}
},
refresh_interval: "1000"
}
But when I search with the following payload, it doesn't return any results, i.e. 0 hits.
URL:
http://xxxxxx:9200/db2/users/_search
Payload:
{
"query": {
"match": {
"IPRANGE": "tr"
}
}
}
What could be the issue?
How have you indexed the document? Here is an example that works:
I changed the mapping so that the autocomplete analyzer is used to index the IPRANGE field, while the default analyzer is used when searching against the field (you don't want to split the search term in the same way).
POST http://localhost:9200/test
{
"settings": {
"analysis": {
"filter": {
"autocomplete_filter": {
"type": "edge_ngram",
"min_gram": 1,
"max_gram": 5
}
},
"analyzer": {
"autocomplete": {
"type": "custom",
"tokenizer": "standard",
"filter": [
"lowercase",
"autocomplete_filter"
]
}
}
}
},
"mappings": {
"users": {
"properties": {
"IPRANGE": {
"type": "string",
"search_analyzer": "autocomplete"
}
}
}
}
}
Index the document
POST http://localhost:9200/test/users/1/
{
"IPRANGE":"trial"
}
Search request:
POST http://localhost:9200/test/users/_search
{
"query": {
"match": {
"IPRANGE": "tr"
}
}
}
Returns the following result:
{
"took": 10,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 0.30685282,
"hits": [
{
"_index": "test",
"_type": "users",
"_id": "1",
"_score": 0.30685282,
"_source": {
"IPRANGE": "trial"
}
}
]
}
}