I attempted to upgrade from Hibernate Search 5.8.0.CR1 to 5.8.2.Final
and from Elasticsearch 2.4.2 to 5.6.4.
When I run my application, I get the following error:
Status: 400 Bad Request
Error message: {"root_cause":[{"type":"illegal_argument_exception",
reason":"Fielddata is disabled on text fields by default.
Set fielddata=true on [title] in order to load fielddata in memory by uninverting the inverted index.
Note that this can however use significant memory. Alternatively use a keyword field instead."}]
I read about Fielddata here:
https://www.elastic.co/guide/en/elasticsearch/reference/5.6/fielddata.html#_fielddata_is_disabled_on_literal_text_literal_fields_by_default
But I'm not sure how to address this issue, especially from Hibernate Search.
My title field definition looks like this:
#Field(name = "title", analyzer = #Analyzer(definition = "my_collation_analyzer"))
#Field(name = "title_polish", analyzer = #Analyzer(definition = "polish"))
protected String title;
I'm using the following analyzer definition:
@AnalyzerDef(name = "my_collation_analyzer",
        tokenizer = @TokenizerDef(factory = KeywordTokenizerFactory.class),
        filters = { @TokenFilterDef(
                name = "polish_collation", factory = ElasticsearchTokenFilterFactory.class, params = {
                        @org.hibernate.search.annotations.Parameter(name = "type", value = "'icu_collation'"),
                        @org.hibernate.search.annotations.Parameter(name = "language", value = "'pl'") }) })
(The polish analyzer comes from the analysis-stempel plugin.)
The Elasticsearch notes on fielddata recommend changing the type of the field
from text to keyword, or setting fielddata=true, but I'm not sure
how to do that with Hibernate Search annotations, because there is no such
property on the @Field annotation.
Update:
Thank you very much for the help on this. I changed my code to this:
@NormalizerDef(name = "my_collation_normalizer",
        filters = { @TokenFilterDef(
                name = "polish_collation_normalization", factory = ElasticsearchTokenFilterFactory.class, params = {
                        @org.hibernate.search.annotations.Parameter(name = "type", value = "'icu_collation'"),
                        @org.hibernate.search.annotations.Parameter(name = "language", value = "'pl'") }) })
...
#Field(name = "title_for_search", analyzer = #Analyzer(definition = "polish"))
#Field(name = "title_for_sort", normalizer = #Normalizer(definition = "my_collation_normalizer"))
#SortableField(forField = "title_for_sort")
protected String title;
Is this OK? As I understand it, there should be no tokenization in a normalizer, but I'm not sure what to use instead of @TokenFilterDef and factory = ElasticsearchTokenFilterFactory.class.
Unfortunately I'm also getting the following error:
Error message: {"root_cause":
[{"type":"illegal_argument_exception",
"reason":"Custom normalizer [my_collation_normalizer] may not use filter
[polish_collation_normalization]"}]
I need collation for sorting, as described in my previous question here: ElasticSearch - define custom letter order for sorting
Update 2:
I tested Elasticsearch 5.6.5 and I think it allows icu_collation in normalizers (my annotations were accepted).
If you are trying to sort on the "title" field, then maybe you forgot to mark the field as sortable with the @SortableField annotation. (More information here) [EDIT: In Hibernate Search 6 you would use @KeywordField(sortable = Sortable.YES). See here]
Also, to avoid errors and for better performance, you should consider using normalizers instead of analyzers for fields you want to sort on (such as your "title" field). This will turn your field into a keyword field, which is what the Elasticsearch logs are hinting at.
More information on normalizers in Hibernate Search is available here, and here are the Elasticsearch specifics in Hibernate Search.
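For example, a rough sketch of a sortable, normalized field with Hibernate Search 5 annotations (the entity, normalizer name and filters below are only placeholders; your icu_collation filter definition would go in the normalizer instead):

@NormalizerDef(name = "my_sort_normalizer",
        filters = {
                // Placeholder filters; replace them with your collation filter definition.
                @TokenFilterDef(factory = ASCIIFoldingFilterFactory.class),
                @TokenFilterDef(factory = LowerCaseFilterFactory.class)
        })
public class Book {

    @Field(name = "title_for_search", analyzer = @Analyzer(definition = "polish"))
    @Field(name = "title_for_sort", normalizer = @Normalizer(definition = "my_sort_normalizer"))
    @SortableField(forField = "title_for_sort")
    protected String title;

}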
You most likely kept the old schema in your Elasticsearch cluster and tried to use it in Elasticsearch 5 with Hibernate Search. This will not work.
When upgrading from Elasticsearch 2 to 5, you must take some steps to upgrade the Elasticsearch schema, in order to use it with Hibernate Search. The easiest option (by far) is to delete the indexes and reindex your whole database. You can find details in the documentation: https://docs.jboss.org/hibernate/stable/search/reference/en-US/html_single/#_upgrading_elasticsearch
Note that you may also have to delete indexes and reindex if your Elasticsearch schema was generated from a Beta version of Hibernate Search: Beta versions are unstable, and may generate an incorrect schema. They are nice for experiments, but definitely not for production environments.
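For reference, once the old indexes are gone (for example by deleting them through the Elasticsearch API, or by using the drop-and-create index schema management strategy), a full reindex with the Hibernate Search 5 mass indexer looks roughly like this sketch (assuming a JPA EntityManager is available):

// Rebuilds the indexes from the database contents; blocks until finished
// (startAndWait() throws InterruptedException).
FullTextEntityManager fullTextEntityManager = Search.getFullTextEntityManager(entityManager);
fullTextEntityManager.createIndexer().startAndWait();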
Edit: Solution: Upgrading to ISIS 1.17.0 and setting the property isis.persistor.datanucleus.standaloneCollection.bulkLoad=false solved the first two problems.
I am using Apache ISIS 1.16.2 and I try to store Blob/Clob content in a MariaDB database (v10.1.35). For that, I use the DB connector org.mariadb.jdbc.mariadb-java-client (v2.3.0) and, in the code, the @Persistent annotation as shown in many examples and in the ISIS documentation.
Using the code below, I just get one single column named content_name (in which the Blob object is serialized in binary form) instead of the three columns content_name, content_mimetype and content_bytes.
This is the Document class with the Blob field content:
@PersistenceCapable(identityType = IdentityType.DATASTORE)
@DatastoreIdentity(strategy = IdGeneratorStrategy.IDENTITY, column = "id")
@DomainObject(editing = Editing.DISABLED, autoCompleteRepository = DocumentRepository.class, objectType = "Document")
@Getter
// ...
public class Document implements Comparable<Document> {

    @Persistent(
            defaultFetchGroup = "false",
            columns = {
                    @Column(name = "content_name"),
                    @Column(name = "content_mimetype"),
                    @Column(name = "content_bytes",
                            jdbcType = "BLOB",
                            sqlType = "LONGVARBINARY")
            })
    @Nonnull
    @Column(allowsNull = "false")
    @Property(optionality = Optionality.MANDATORY)
    private Blob content;

    // ...

    @Column(allowsNull = "false")
    @Property
    private Date created = new Date();

    public Date defaultCreated() {
        return new Date();
    }

    @Column(allowsNull = "true")
    @Property
    @Setter
    private String owner;

    // ...
}
This creates the following schema for the DomainObject class Document with just one column for the Blob field:
CREATE TABLE `document` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`content_name` mediumblob,
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`owner` varchar(255) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Normally, the class org.apache.isis.objectstore.jdo.datanucleus.valuetypes.IsisBlobMapping of the ISIS framework should do the mapping. But it seems that this Mapper is somehow not involved...
1. Question: How do I get the Blob field split up into the three columns (as described above and shown in many demo projects)? Even if I switch to HSQLDB, I still get only one column, so this does not seem to be a MariaDB issue.
2. Question: If I use a Blob/Clob field in a class that inherits from another DomainObject class, I often get an org.datanucleus.exceptions.NucleusException (stack trace below) and I cannot make head or tail of it. What are potential pitfalls when dealing with inheritance? Why am I getting this exception?
3. Question: I need to store documents belonging to domain objects (as you might have guessed). The proper way of doing so would be to store the documents in a file system tree instead of the database (which also has, by default, some size limitations for object data) and to reference the files in the object. In the DataNucleus documentation I found the extension serializeToFileLocation that should do exactly that. I tried it by adding the line @Extension(vendorName = "datanucleus", key = "serializeToFileLocation", value = "document-repository") to the Blob field, but nothing happened. So my question is: Is this DataNucleus extension compatible with Apache ISIS?
If this extension conflicts with Isis, would it be possible to have a javax.jdo.listener.StoreLifecycleListener or org.apache.isis.applib.AbstractSubscriber that stores the Blob on the file system before persisting the domain object to the database and restores it before loading? Are there better solutions available?
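The kind of listener I have in mind would be something like this skeleton (the class name and file handling are made up, just to illustrate the idea):

import javax.jdo.listener.InstanceLifecycleEvent;
import javax.jdo.listener.StoreLifecycleListener;

public class BlobToFileStoreListener implements StoreLifecycleListener {

    @Override
    public void preStore(InstanceLifecycleEvent event) {
        Object pc = event.getPersistentInstance();
        if (pc instanceof Document) {
            // Write the Blob bytes to a file on disk and keep only a reference
            // (e.g. a relative path) in the entity before it is persisted.
        }
    }

    @Override
    public void postStore(InstanceLifecycleEvent event) {
        // Optionally restore the in-memory Blob after the store has completed.
    }
}

// Registration, e.g. wherever the PersistenceManager is available:
// persistenceManager.addInstanceLifecycleListener(new BlobToFileStoreListener(), Document.class);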
That's it for now. Thank you in advance! ;-)
The stack trace to question 2:
... (other Wicket related stack trace)
Caused by: org.datanucleus.exceptions.NucleusException: Creation of SQLExpression for mapping "org.datanucleus.store.rdbms.mapping.java.SerialisedMapping" caused error
at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newExpression(SQLExpressionFactory.java:199)
at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newExpression(SQLExpressionFactory.java:155)
at org.datanucleus.store.rdbms.request.LocateBulkRequest.getStatement(LocateBulkRequest.java:158)
at org.datanucleus.store.rdbms.request.LocateBulkRequest.execute(LocateBulkRequest.java:283)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.locateObjects(RDBMSPersistenceHandler.java:564)
at org.datanucleus.ExecutionContextImpl.findObjects(ExecutionContextImpl.java:3313)
at org.datanucleus.api.jdo.JDOPersistenceManager.getObjectsById(JDOPersistenceManager.java:1850)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.loadPersistentPojos(PersistenceSession.java:1010)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.adaptersFor(PersistenceSession.java:1603)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.adaptersFor(PersistenceSession.java:1573)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel$Type$1.loadInBulk(EntityCollectionModel.java:107)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel$Type$1.load(EntityCollectionModel.java:93)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel.load(EntityCollectionModel.java:454)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel.load(EntityCollectionModel.java:70)
at org.apache.wicket.model.LoadableDetachableModel.getObject(LoadableDetachableModel.java:135)
at org.apache.isis.viewer.wicket.ui.components.collectioncontents.ajaxtable.CollectionContentsSortableDataProvider.size(CollectionContentsSortableDataProvider.java:68)
at org.apache.wicket.markup.repeater.data.DataViewBase.internalGetItemCount(DataViewBase.java:142)
at org.apache.wicket.markup.repeater.AbstractPageableView.getItemCount(AbstractPageableView.java:235)
at org.apache.wicket.markup.repeater.AbstractPageableView.getRowCount(AbstractPageableView.java:216)
at org.apache.wicket.markup.repeater.AbstractPageableView.getViewSize(AbstractPageableView.java:314)
at org.apache.wicket.markup.repeater.AbstractPageableView.getItemModels(AbstractPageableView.java:99)
at org.apache.wicket.markup.repeater.RefreshingView.onPopulate(RefreshingView.java:93)
at org.apache.wicket.markup.repeater.AbstractRepeater.onBeforeRender(AbstractRepeater.java:124)
at org.apache.wicket.markup.repeater.AbstractPageableView.onBeforeRender(AbstractPageableView.java:115)
at org.apache.wicket.Component.internalBeforeRender(Component.java:950)
at org.apache.wicket.Component.beforeRender(Component.java:1018)
at org.apache.wicket.MarkupContainer.onBeforeRenderChildren(MarkupContainer.java:1825)
... 81 more
Caused by: org.datanucleus.exceptions.NucleusException: Unable to create SQLExpression for mapping of type "org.datanucleus.store.rdbms.mapping.java.SerialisedMapping" since not supported
at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newExpression(SQLExpressionFactory.java:189)
at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newExpression(SQLExpressionFactory.java:155)
at org.datanucleus.store.rdbms.request.LocateBulkRequest.getStatement(LocateBulkRequest.java:158)
at org.datanucleus.store.rdbms.request.LocateBulkRequest.execute(LocateBulkRequest.java:283)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.locateObjects(RDBMSPersistenceHandler.java:564)
at org.datanucleus.ExecutionContextImpl.findObjects(ExecutionContextImpl.java:3313)
at org.datanucleus.api.jdo.JDOPersistenceManager.getObjectsById(JDOPersistenceManager.java:1850)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.loadPersistentPojos(PersistenceSession.java:1010)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.adaptersFor(PersistenceSession.java:1603)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.adaptersFor(PersistenceSession.java:1573)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel$Type$1.loadInBulk(EntityCollectionModel.java:107)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel$Type$1.load(EntityCollectionModel.java:93)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel.load(EntityCollectionModel.java:454)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel.load(EntityCollectionModel.java:70)
at org.apache.wicket.model.LoadableDetachableModel.getObject(LoadableDetachableModel.java:135)
at org.apache.isis.viewer.wicket.ui.components.collectioncontents.ajaxtable.CollectionContentsSortableDataProvider.size(CollectionContentsSortableDataProvider.java:68)
at org.apache.wicket.markup.repeater.data.DataViewBase.internalGetItemCount(DataViewBase.java:142)
at org.apache.wicket.markup.repeater.AbstractPageableView.getItemCount(AbstractPageableView.java:235)
at org.apache.wicket.markup.repeater.AbstractPageableView.getRowCount(AbstractPageableView.java:216)
at org.apache.wicket.markup.repeater.AbstractPageableView.getViewSize(AbstractPageableView.java:314)
at org.apache.wicket.markup.repeater.AbstractPageableView.getItemModels(AbstractPageableView.java:99)
at org.apache.wicket.markup.repeater.RefreshingView.onPopulate(RefreshingView.java:93)
at org.apache.wicket.markup.repeater.AbstractRepeater.onBeforeRender(AbstractRepeater.java:124)
at org.apache.wicket.markup.repeater.AbstractPageableView.onBeforeRender(AbstractPageableView.java:115)
// <-- 8 times the following lines
at org.apache.wicket.Component.internalBeforeRender(Component.java:950)
at org.apache.wicket.Component.beforeRender(Component.java:1018)
at org.apache.wicket.MarkupContainer.onBeforeRenderChildren(MarkupContainer.java:1825)
at org.apache.wicket.Component.onBeforeRender(Component.java:3916)
// -->
at org.apache.wicket.Page.onBeforeRender(Page.java:801)
at org.apache.wicket.Component.internalBeforeRender(Component.java:950)
at org.apache.wicket.Component.beforeRender(Component.java:1018)
at org.apache.wicket.Component.internalPrepareForRender(Component.java:2236)
at org.apache.wicket.Page.internalPrepareForRender(Page.java:242)
at org.apache.wicket.Component.render(Component.java:2325)
at org.apache.wicket.Page.renderPage(Page.java:1018)
at org.apache.wicket.request.handler.render.WebPageRenderer.renderPage(WebPageRenderer.java:124)
at org.apache.wicket.request.handler.render.WebPageRenderer.respond(WebPageRenderer.java:195)
at org.apache.wicket.core.request.handler.RenderPageRequestHandler.respond(RenderPageRequestHandler.java:175)
at org.apache.wicket.request.cycle.RequestCycle$HandlerExecutor.respond(RequestCycle.java:895)
at org.apache.wicket.request.RequestHandlerStack.execute(RequestHandlerStack.java:64)
at org.apache.wicket.request.cycle.RequestCycle.execute(RequestCycle.java:265)
at org.apache.wicket.request.cycle.RequestCycle.processRequest(RequestCycle.java:222)
at org.apache.wicket.request.cycle.RequestCycle.processRequestAndDetach(RequestCycle.java:293)
at org.apache.wicket.protocol.http.WicketFilter.processRequestCycle(WicketFilter.java:261)
at org.apache.wicket.protocol.http.WicketFilter.processRequest(WicketFilter.java:203)
at org.apache.wicket.protocol.http.WicketFilter.doFilter(WicketFilter.java:284)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at org.apache.isis.core.webapp.diagnostics.IsisLogOnExceptionFilter.doFilter(IsisLogOnExceptionFilter.java:52)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:449)
at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:383)
at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:362)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
// ... some Jetty stuff
at java.lang.Thread.run(Thread.java:748)
After some research, I think the problems in questions 1 and 2 are related to this ISIS bug report #1902.
In short: the DataNucleus extension plugin resolution mechanism does not seem to find the ISIS value type adapters and therefore cannot know how to serialize ISIS's Blob/Clob types.
According to the aforementioned ISIS bug report, this problem is fixed in 1.17.0, so I am trying to upgrade from 1.16.2 to that version (which introduced many other problems, but that will be a separate topic).
For question 3, I found Minio, which basically addresses my problem but is a bit oversized for my needs. I will keep looking for other ways to store Blobs/Clobs on the local file system and will keep Minio in mind...
UPDATE:
I upgraded my project to ISIS version 1.17.0. It solved my problem in question 1 (now I get three columns for a Blob/Clob object).
The problem in question 2 (NucleusException) is not solved by the upgrade. I figured out that it is only thrown when a list of DomainObjects with Blob/Clob field(s) is returned, i.e. when it is rendered as a standalone table. If I go directly to an object's entity view, no exception is thrown and I can see/modify/download the Blob/Clob content.
In the meantime, I wrote my own DataNucleus plugin that stores the Blobs/Clobs as files on a file system.
UPDATE 2
I found a solution for circumventing the org.datanucleus.exceptions.NucleusException: Unable to create SQLExpression for mapping of type "org.apache.isis.objectstore.jdo.datanucleus.valuetypes.IsisClobMapping" since not supported. It seems to be a problem with bulk-loading (but I do not know any details).
Deactivating bulk-load via the property isis.persistor.datanucleus.standaloneCollection.bulkLoad=false (which is initially set to true by ISIS archetypes) solved the problem.
I'm using hibernate-search-elasticsearch 5.8.2.Final and I can't figure out how to get script fields:
https://www.elastic.co/guide/en/elasticsearch/reference/5.6/search-request-script-fields.html
Is there any way to accomplish this functionality?
This is not possible in Hibernate Search 5.8.
In Hibernate Search 5.10 you could get direct access to the REST client, send a REST request to Elasticsearch and get the result as a JSON string that you would have to parse yourself, but it is very low-level and you would not benefit from the Hibernate Search search APIs at all (no query DSL, no managed entity loading, no direct translation entity type => index name, ...).
If you want better support for this feature, don't hesitate to open a ticket on our JIRA, describing in detail what you are trying to achieve and how you would have expected to do it. We are currently working on Search 6.0, which brings a lot of improvements, in particular when it comes to using native features of Elasticsearch, so it just might be something we could slip into our backlog.
EDIT: I forgot to mention that, while you cannot use server-side scripts, you can still get the full source from your documents, and do some parsing in your application to achieve a similar result. This will work even in Search 5.8:
FullTextEntityManager fullTextEm = Search.getFullTextEntityManager(entityManager);
QueryBuilder qb = fullTextEm.getSearchFactory().buildQueryBuilder().forEntity( VideoGame.class ).get();
FullTextQuery query = fullTextEm.createFullTextQuery(
        qb.keyword()
                .onField( "tags" )
                .matching( "round-based" )
                .createQuery(),
        VideoGame.class
)
        .setProjection( ElasticsearchProjectionConstants.SCORE, ElasticsearchProjectionConstants.SOURCE );

// Each result is an Object[] whose elements follow the order given to setProjection().
Object[] projection = (Object[]) query.getSingleResult();
float score = (float) projection[0];
String source = (String) projection[1];
See this section of the documentation.
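The parsing itself can be done with any JSON library. For instance, a rough sketch with Gson (the field names here are made up, just to show the idea of computing a "script field"-like value client-side):

// Parse the projected _source string and derive a value from it.
JsonObject json = new JsonParser().parse(source).getAsJsonObject();
int price = json.get("price").getAsInt();
int discount = json.get("discount").getAsInt();
int discountedPrice = price - discount;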
I am currently using the code below in Morphia to have a TTL index on the document.
#Entity(value = "productDils", noClassnameStored = true)
#Indexes(
{#Index(fields = {}, options = #IndexOptions(expireAfterSeconds = 36)),
#Index(fields = {#Field("pid")}, options = #IndexOptions(unique = true))
}
)
public class ProductDils {}
But I am getting the below error.
Exception in thread "main" org.mongodb.morphia.mapping.MappingException: Could not resolve path '' against 'com.example.productdils.ProductDeils'.
Can someone please help out?
PS: I am aware of how this is done with the MongoDB Java client, but my application uses Morphia.
You're not specifying the field to apply the TTL to, for starters. Perhaps you just omitted the fields for brevity, but you need them, of course, to define indexes against.
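For example, something along these lines should work (createdAt is just a placeholder for whatever date field the TTL should count from):

@Entity(value = "productDils", noClassnameStored = true)
@Indexes({
        // TTL index: documents expire 36 seconds after the value of "createdAt".
        @Index(fields = {@Field("createdAt")}, options = @IndexOptions(expireAfterSeconds = 36)),
        @Index(fields = {@Field("pid")}, options = @IndexOptions(unique = true))
})
public class ProductDils {

    private Date createdAt = new Date();

    // ...
}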
I have created an Elasticsearch index using the Elasticsearch Java API. Now I would like to perform some aggregations on the data stored in this index, but I get the following error:
"reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [item] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."
As suggested at this link, to solve this issue I should enable fielddata on the "item" text field, but how can I do that using the Elasticsearch Java API?
An alternative might be mapping the "item" field as a keyword, but same question: how can I do that with the Elasticsearch Java API?
For a new index you can set the mappings at creation time by doing something like:
CreateIndexRequest createIndexRequest = new CreateIndexRequest(indexName);
String source = // probably read the mapping from a file
createIndexRequest.source(source, XContentType.JSON);
restHighLevelClient.indices().create(createIndexRequest);
The mapping should have the same format as a request you would send to the REST endpoint, similar to this:
{
"mappings": {
"your_type": {
"properties": {
"your_property": {
"type": "keyword"
}
}
}
}
}
Use XContentBuilder; it makes it easy to build the JSON to create or update a mapping. It looks something like this:
.startObject("field's name")
.field("type", "text")
.field("fielddata", true)
.endObject()
Afterwards, use a CreateIndexRequest to create a new index with this mapping, or a PutMappingRequest to update an existing mapping.
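For example, a rough sketch using the high-level REST client (the index, type and field names are placeholders, and the exact method signatures depend on your client version):

// Build the mapping JSON: enable fielddata on the "item" text field.
XContentBuilder mapping = XContentFactory.jsonBuilder()
        .startObject()
            .startObject("properties")
                .startObject("item")
                    .field("type", "text")
                    .field("fielddata", true)
                .endObject()
            .endObject()
        .endObject();

// Apply it to an existing index/type.
PutMappingRequest putMappingRequest = new PutMappingRequest("my_index");
putMappingRequest.type("my_type");
putMappingRequest.source(mapping);
restHighLevelClient.indices().putMapping(putMappingRequest);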
I recently encountered this issue as well, but in my case it happened while sorting. I have posted a working solution for this problem in another similar question -> set field data = true on java elasticsearch
I'm using Apache Lucene 6.6.0 and I'm trying to extract terms from the search query. The current version of the code looks like this:
Query parsedQuery = new AnalyzingQueryParser("", analyzer).parse(query);
Weight weight = parsedQuery.createWeight(searcher, false);
Set<Term> terms = new HashSet<>();
weight.extractTerms(terms);
It works pretty much fine, but I recently noticed that it doesn't support queries with wildcards (i.e. the * sign). If the query contains a wildcard, I get an exception:
java.lang.UnsupportedOperationException: Query
id:123*456 does not implement createWeight at
org.apache.lucene.search.Query.createWeight(Query.java:66) at
org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:751)
at
org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:60)
at
org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:225)
So is there a way to use createWeight() with wildcarded queries? Or maybe there's another way to extract search terms from query without createWeight()?
Long story short, it is necessary to rewrite the query, for example, as follows:
final AnalyzingQueryParser analyzingQueryParser = new AnalyzingQueryParser("", analyzer);
// TODO: The rewrite method can be overridden.
// analyzingQueryParser.setMultiTermRewriteMethod(MultiTermQuery.CONSTANT_SCORE_BOOLEAN_REWRITE);
Query parsedQuery = analyzingQueryParser.parse(query);
// Here parsedQuery is an instance of the org.apache.lucene.search.WildcardQuery class.
parsedQuery = parsedQuery.rewrite(reader); // reader: the searcher's IndexReader, e.g. searcher.getIndexReader()
// Here parsedQuery is an instance of the org.apache.lucene.search.MultiTermQueryConstantScoreWrapper class.
final Weight weight = parsedQuery.createWeight(searcher, false);
final Set<Term> terms = new HashSet<>();
weight.extractTerms(terms);
For further details, please refer to the thread:
Nabble: Lucene - Java Users - How to get the terms matching a WildCardQuery in Lucene 6.2?
Mail archive: How to get the terms matching a WildCardQuery in Lucene 6.2?
It seems the mentioned Stack Overflow question is this one: How to get matches from a wildcard Query in Lucene 6.2.