I am upgrading this plugin: https://github.com/meltwater/elasticsearch-analysis-combo/tree/f3d4d365881416355e935afb966386a40325a53c
from ES 2.1.1 to ES 2.2.0. I made the required changes in my plugin and installed it. Now, when I run any request against ES, it throws a NoClassDefFoundError for a class that is present in one of the JARs in the plugin.
I used these settings to create my index:
{
"index" : {
"analysis" : {
"analyzer" : {
"default" : {
"type" : "custom",
"tokenizer" : "standard",
"filter" : [ "snowball", "lowercase" ]
},
"combo" : {
"type" : "combo",
"sub_analyzers" : [ "standard", "default" ]
}
},
"filter" : {
"snowball" : {
"type" : "snowball",
"language" : "english"
}
}
}
}
}
This is the request I am sending:
localhost:9200/testindex/_analyze?analyzer=combo&text=algorithm
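For reference, the same request as a curl command (assuming a single local node on the default HTTP port):
curl -XGET 'localhost:9200/testindex/_analyze?analyzer=combo&text=algorithm'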
and this is the response:
{
"error": {
"root_cause": [
{
"type": "remote_transport_exception",
"reason": "[Edward \"Ned\" Buckman][127.0.0.1:9300][indices:admin/analyze[s]]"
}
],
"type": "no_class_def_found_error",
"reason": "Could not initialize class org.apache.lucene.util.ReaderCloneFactory"
},
"status": 500
}
ES console logs after executing the above request:
RemoteTransportException[[Edward "Ned" Buckman][127.0.0.1:9300][indices:admin/analyze[s]]]; nested: NoClassDefFoundError[Could not initialize class org.apache.lucene.util.ReaderCloneFactory];
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.lucene.util.ReaderCloneFactory
at org.apache.lucene.analysis.ComboAnalyzer$CombiningTokenStreamComponents.createTokenStreams(ComboAnalyzer.java:204)
at org.apache.lucene.analysis.ComboAnalyzer$CombiningTokenStreamComponents.getTokenStream(ComboAnalyzer.java:195)
at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:182)
at org.elasticsearch.action.admin.indices.analyze.TransportAnalyzeAction.simpleAnalyze(TransportAnalyzeAction.java:240)
at org.elasticsearch.action.admin.indices.analyze.TransportAnalyzeAction.shardOperation(TransportAnalyzeAction.java:225)
at org.elasticsearch.action.admin.indices.analyze.TransportAnalyzeAction.shardOperation(TransportAnalyzeAction.java:63)
at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$ShardTransportHandler.messageReceived(TransportSingleShardAction.java:282)
at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$ShardTransportHandler.messageReceived(TransportSingleShardAction.java:275)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
We are starting the Qpid broker from Java code. The libraries used are qpid-broker-core, qpid-broker-plugins-amqp-1-0-protocol, and qpid-broker-plugins-management-http.
Map<String, Object> attributes = new HashMap<>();
attributes.put("type", "Memory");
attributes.put("qpid.broker.defaultPreferenceStoreAttributes", "{\"type\": \"Noop\"}");
String resourcePath = findResourcePath("initial-config.json");
attributes.put("initialConfigurationLocation", resourcePath);
attributes.put("startupLoggedToSystemOut", "false");
System.setProperty("qpid.tests.mms.messagestore.persistence", "true");
System.setProperty("qpid.amqp_port", port);
System.setProperty("qpid.http_port", hport);
try {
URL.setURLStreamHandlerFactory(protocol -> ("classpath".equals(protocol) ? new Handler() : null));
} catch (final Error ignored) {
// Java is ridiculous and doesn't allow setting the factory if it's already been set
}
try {
LOGGER.info("*** Starting QPID Broker....");
broker.startup(attributes);
LOGGER.info("*** QPID Broker started.");
} catch (Exception e) {
LOGGER.error("*** QPID Broker failed to start.", e);
}
We can see that debug logging is enabled; all startup logs are printed to the console. How do we change the log level to WARN?
The initial config JSON looks like this:
{
"name": "EmbeddedBroker",
"modelVersion": "8.0",
"authenticationproviders": [
{
"name": "anonymous",
"type": "Anonymous"
}
],
"ports": [
{
"name": "AMQP",
"bindingAddress": "localhost",
"port": "${qpid.amqp_port}",
"protocols": [ "AMQP_1_0" ],
"authenticationProvider": "anonymous",
"virtualhostaliases" : [ {
"name" : "nameAlias",
"type" : "nameAlias"
}, {
"name" : "defaultAlias",
"type" : "defaultAlias"
}, {
"name" : "hostnameAlias",
"type" : "hostnameAlias"
} ]
},
{
"name" : "HTTP",
"port" : "${qpid.http_port}",
"protocols" : [ "HTTP" ],
"authenticationProvider" : "anonymous"
}
],
"virtualhostnodes": [
{
"name": "default",
"defaultVirtualHostNode": "true",
"type": "Memory",
"virtualHostInitialConfiguration": "{\"type\": \"Memory\" }"
}
],
"plugins" : [
{
"type" : "MANAGEMENT-HTTP",
"name" : "httpManagement"
}
]
}
We tried adding brokerloggers to the initial config JSON, but it is not working.
In config.json the log level is defined by the "brokerloginclusionrules" field:
"brokerloggers" : [ {
"name" : "logfile",
"type" : "File",
"fileName" : "${qpid.work_dir}${file.separator}log${file.separator}qpid.log",
"brokerloginclusionrules" : [ {
"name" : "Root",
"type" : "NameAndLevel",
"level" : "WARN",
"loggerName" : "ROOT"
}, {
"name" : "Qpid",
"type" : "NameAndLevel",
"level" : "INFO",
"loggerName" : "org.apache.qpid.*"
}, {
"name" : "Operational",
"type" : "NameAndLevel",
"level" : "INFO",
"loggerName" : "qpid.message.*"
}, {
"name" : "Statistics",
"type" : "NameAndLevel",
"level" : "INFO",
"loggerName" : "qpid.statistics.*"
} ]
}
]
See the documentation for a complete example.
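Since the goal here is to quiet console output, note that Broker-J also provides a Console broker logger type. A minimal sketch, assuming the same inclusion-rule structure as above with the Root rule raised to WARN (exact attributes may vary between Broker-J versions):
"brokerloggers" : [ {
"name" : "console",
"type" : "Console",
"brokerloginclusionrules" : [ {
"name" : "Root",
"type" : "NameAndLevel",
"level" : "WARN",
"loggerName" : "ROOT"
} ]
} ]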
You could also read and update the log level at runtime using the Broker-J REST API.
E.g. this curl command will return the list of broker loggers:
curl http://<USERNAME>:<PASSWORD>@<HOSTNAME>:<PORT>/api/latest/brokerlogger
This curl command will return the list of broker log inclusion rules:
curl http://<USERNAME>:<PASSWORD>@<HOSTNAME>:<PORT>/api/latest/brokerloginclusionrule
This curl command will change log level of a log inclusion rule specified:
curl --data '{"level": "INFO"}' http://<USERNAME>:<PASSWORD>#<HOSTNAME>:<PORT>/api/latest/brokerinclusionrule/<BROKER_LOGGER_NAME>/<BROKER_LOG_INCLUSION_RULE_NAME>
I have a complex index with an ngram analyzer. I want to be able to create a new index through the Java API. I am currently using Kotlin for this, but with the same framework. I have created the schema for this index as follows:
{
"settings": {
"index": {
"max_ngram_diff": 20,
"search.idle.after": "10m"
},
"analysis": {
"analyzer": {
"ngram3_analyzer": {
"tokenizer": "ngram3_tokenizer",
"filter": [
"lowercase"
]
}
},
"tokenizer": {
"ngram3_tokenizer": {
"type": "ngram",
"min_gram": 3,
"max_gram": 20
}
}
}
},
"mappings": {
"dynamic": "strict",
"_doc": {
"properties": {
"name": {
"type": "keyword",
"fields": {
"partial": {
"type": "text",
"analyzer": "ngram3_analyzer",
"search_analyzer": "keyword"
},
"text": {
"type": "text"
}
}
},
"location": {
"type": "geo_shape",
"ignore_malformed": true
},
"type": {
"type": "keyword"
},
"sort": {
"type": "integer"
}
}
}
}
}
This JSON schema works when passed manually via a REST client PUT call, returning:
{
"acknowledged": true,
"shards_acknowledged": true,
"index": "new_index_created"
}
When I pass the same schema via the Elasticsearch Java API using the following Kotlin function:
private fun createIndex(index: String, schema: String) {
val createIndexRequest = CreateIndexRequest(index).mapping(schema, XContentType.JSON)
getClient().indices().create(createIndexRequest, RequestOptions.DEFAULT)
}
I get this response:
Elasticsearch exception [type=mapper_parsing_exception, reason=Failed to parse mapping [_doc]: Root mapping definition has unsupported parameters: [settings : {index={max_ngram_diff=20, search.idle.after=10m}, analysis={analyzer={ngram3_analyzer={filter=[lowercase], tokenizer=ngram3_tokenizer}}, tokenizer={ngram3_tokenizer={min_gram=3, type=ngram, max_gram=20}}}}] [mappings : {_doc={properties={name={type=keyword, fields={text={type=text}, partial={search_analyzer=keyword, analyzer=ngram3_analyzer, type=text}}}, location={ignore_malformed=true, type=geo_shape}, sort={type=integer}, type={type=keyword}}}, dynamic=strict}]]
Any help on this issue would be great :)
The error you get is because you're passing both mappings and settings into the mapping(...) call.
You can either call mapping() with only the mappings section and settings() with only the settings section, or you can call source() like this:
val createIndexRequest = CreateIndexRequest(index).source(schema, XContentType.JSON)   // <-- use source(...) instead of mapping(...)
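For the first option, here is a minimal Kotlin sketch, reusing the getClient() helper from the question and assuming the original JSON has been split into two hypothetical strings: settingsJson holding only the contents of the "settings" object and mappingsJson holding only the contents of the "mappings" object:
private fun createIndexSplit(index: String, settingsJson: String, mappingsJson: String) {
    // Pass each section to its matching builder method instead of sending
    // the whole document to mapping(...)
    val request = CreateIndexRequest(index)
        .settings(settingsJson, XContentType.JSON)
        .mapping(mappingsJson, XContentType.JSON)
    getClient().indices().create(request, RequestOptions.DEFAULT)
}
If the schema already matches the body of the REST create-index call, source() is the simpler choice, since it accepts settings, mappings and aliases in a single document.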
I need to implement stemmer search; I've found this link in the Elasticsearch documentation. There is JSON that I've sent to the Elasticsearch server, but I am new to Elasticsearch and cannot figure out how to implement this in Java. I also cannot find any examples. Could you please help me with this?
I've added the settings with:
PUT /data
{
"settings": {
"analysis" : {
"analyzer" : {
"my_analyzer" : {
"tokenizer" : "standard",
"filter" : ["standard", "lowercase", "my_stemmer"]
}
},
"filter" : {
"my_stemmer" : {
"type" : "stemmer",
"name" : "english"
}
}
}
}
}
After that, I am trying to find 'skis' with this query:
GET data/_search
{
"query": {
"simple_query_string": {
"fields": [ "value36" ],
"query": "ski"
}
}
}
but the result is empty.
I have a JSD named SampleRequestMessage.jsd. In this JSD I have a reference to another JSD, SampleRequestMessageProperties.jsd, as shown below:
{
"$schema": "http://json-schema.org/draft-04/schema#",
"javaName": "SampleConfigureNodeRequestMessage",
"description": "This message comes from sample-paqx and gets translated into Southbound version of this message",
"_meta": {
"message":"com.dell.cpsd.sample.configure.node.request",
"version":"1.0"
},
"type" : "object",
"id" : "**SampleRequestMessage.jsd**",
"properties" : {
"messageProperties" : {
"type" : "object",
"$ref" : "**SampleRequestMessageProperties.jsd**"
},
"endpointURL" : {
"type" : "string"
},
"userName" : {
"type" : "string"
},
"password" : {
"type" : "string"
}
},
"required":[
"messageProperties",
"endpointURL",
"userName",
"password"
]
}
I want the Schema object of this JSD so that I can validate it against a JSON document. Now, how can I load all the references of the parent JSD? In this case it is SampleRequestMessageProperties.jsd. This JSD is pulled from one of the dependency JARs. I may have to pull the referenced JSDs from multiple folders and create a Schema object for the parent JSD. How can I do this? Please help.
You could do it like this:
{
"$schema": "http://json-schema.org/draft-04/schema#",
"javaName": "SampleConfigureNodeRequestMessage",
"description": "This message comes from sample-paqx and gets translated into Southbound version of this message",
"_meta": {
"message":"com.dell.cpsd.sample.configure.node.request",
"version":"1.0"
},"definitions": {
"SampleRequestMessage": {
"type": "object",
"properties": {
"test": { "type": "string" }
},
"required": ["test"]
}
},
"type" : "object",
"properties" : {
"messageProperties" : {"$ref": "#/definitions/SampleRequestMessage"
},
"endpointURL" : {
"type" : "string"
},
"userName" : {
"type" : "string"
},
"password" : {
"type" : "string"
}
},
"required":[
"messageProperties",
"endpointURL",
"userName",
"password"
]
}
This would validate the following JSON:
{
"messageProperties": {"test": "hello"},
"endpointURL": "test.com",
"userName": "test",
"password": "secret"
}
The definitions can also be in an external file. For more info, refer to the JSON Schema documentation.
Hope this helps
I am using Java Spring Data Elasticsearch, and I want to use a sub-aggregation to model the following query:
{
"from" : 0,
"size" : 10,
"sort" : [ {
"_score" : {
"order" : "desc"
}
} ],
"aggregations" : {
"parentAgg" : {
"terms" : {
"field" : "parentField",
"size" : 0
},
"aggregations" : {
"childAgg" : {
"terms" : {
"field" : "childField"
}
}
}
}
}
}
Currently I have used a sub-aggregation (i.e. Aggregation.subAggregation(subAggName)); however, the output I get is:
"aggregations": [
{
"field": "parentAgg",
"values": [
{
"term": "val1",
"docCount": 2
},
{
"term": "val2",
"docCount": 2
},
{
"term": "val3",
"docCount": 1
}
]
}
]
Relevant Java code:
for (Object aggregationField : request.getAggregationFields()) {
TermsBuilder termBuilder = AggregationBuilders.terms(aggregationField.toString())
.field(aggregationField.toString()).size(0);
if(aggregationField.toString().equals("parentField"))
{
TermsBuilder childBuilder = AggregationBuilders.terms("childAgg").field("childField").size(0);
termBuilder.subAggregation(childBuilder);
}
nativeSearchQueryBuilder.addAggregation(termBuilder);
}
Can you please let me know what I am missing?