NotXContentException when creating ingest pipeline in Elasticsearch 5.1.2 (Solaris) - java

I am trying to create an ingest pipeline using the PUT request below:
{
  "description": "ContentExtractor",
  "processors": [
    {
      "extractor": {
        "field": "contentData",
        "target_field": "content"
      }
    }
  ]
}
But this results in the following error:
{
  "error": {
    "root_cause": [
      {
        "type": "not_x_content_exception",
        "reason": "Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"
      }
    ],
    "type": "not_x_content_exception",
    "reason": "Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"
  },
  "status": 500
}
I see the following exception in the ES logs:
org.elasticsearch.common.compress.NotXContentException: Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes
at org.elasticsearch.common.compress.CompressorFactory.compressor(CompressorFactory.java:57) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.common.xcontent.XContentHelper.convertToMap(XContentHelper.java:65) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.ingest.PipelineStore.validatePipeline(PipelineStore.java:154) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.ingest.PipelineStore.put(PipelineStore.java:133) ~[elasticsearch-5.1.2.jar:5.1.2]
This problem happens when Elasticsearch is running on Solaris; the same request works fine on Linux. What am I doing wrong? Can somebody help me fix this issue?
Thanks in advance.

I got the exact same error message (on a different version of Elasticsearch) when sending the request in the wrong data format. I had misread the docs (https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html): the "request body" there is expected to be plain JSON, not a form-encoded HTTP body. The same error also appears when using the old syntax in the URL path (just after the index name in the URL):
curl -XPUT -H "Content-Type: application/json" http://host:port/index/_mapping/_doc -d "mappings=#mymapping.json"
Just remove the "mappings=" prefix and the trailing path segment!
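For reference, a minimal Java sketch of the same pipeline request (the host, port and pipeline name content-extractor below are placeholders, not taken from the question): the point is that the JSON goes into the raw request body with a Content-Type: application/json header, not into a form field.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PutPipeline {
    public static void main(String[] args) throws Exception {
        // Pipeline definition from the question, sent as plain JSON bytes.
        String body = "{\"description\":\"ContentExtractor\",\"processors\":["
                + "{\"extractor\":{\"field\":\"contentData\",\"target_field\":\"content\"}}]}";
        URL url = new URL("http://host:9200/_ingest/pipeline/content-extractor");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}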

Related

Configuration for GlassFish server (proxy_read_timeout)

I have a Java application based on the SeedStack framework; internally it uses the GlassFish server.
I generate an XLS file and export it. If the file generation completes within 60 seconds, it works fine. If the file generation takes more than 60 seconds, it errors out. How can I resolve this issue? Is this related to proxy_read_timeout? How do I set this key to increase the time? Thanks for the help.
Log
{ "#timestamp": "2023-02-10T05:35:31.671Z", "#version": "1", "message": "An I/O error has occurred while writing a response message entity to the container output stream.", "logger_name": "org.glassfish.jersey.server.ServerRuntime$Responder", "thread_name": "XNIO-1 task-4", "level": "ERROR", "level_value": 40000, "stack_trace": "org.glassfish.jersey.server.internal.process.MappableException: java.io.IOException: Broken pipe\n\tat org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:91)\n\tat org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:163)\n\tat org.glassfish.jersey.message.internal.MessageBodyFactory.writeTo(MessageBodyFactory.java:1135)\n\tat

Spring Cloud Config Server: error: No such label: master

When I access the URL http://localhost:8888/actuator/health I get this error:
{
  "status": "DOWN",
  "details": {
    "diskSpace": {
      "status": "UP",
      "details": {
        "total": 457192763392,
        "free": 347865096192,
        "threshold": 10485760
      }
    },
    "refreshScope": {
      "status": "UP"
    },
    "configServer": {
      "status": "DOWN",
      "details": {
        "repository": {
          "application": "app",
          "profiles": "default"
        },
        "error": "org.springframework.cloud.config.server.environment.NoSuchLabelException: No such label: master"
      }
    }
  }
}
My application.yml is attached as a screenshot.
By default, Spring Cloud Config Server tries to get properties from the branch "master" in the Git repository. Your repository doesn't have it (you have the branch "main" instead).
You can use the default-label property to set a custom branch name (see the docs):
spring:
  cloud:
    config:
      server:
        git:
          default-label: main
Or, you can rename your branch to master and leave all other things as they are.
Check your property file name and the file name used in the request.
I was getting the same error and resolved it by making the two changes below.
1st Change:
spring.cloud.config.server.git.uri=E:\\\\spring_microservice\\\\git-localconfig-repo
I removed the file:/// prefix and used four backslashes (\\\\) as the path separator.
2nd Change:
Actually, I was making the request with an incorrect property file name. I was requesting http://localhost:8888/limit-servers/default, but the correct file name is limit-server.properties, so I corrected the request as below.
http://localhost:8888/limit-server/default
And it worked for me.
I was getting the same issue and resolved it with the change below:
spring.cloud.config.server.git.uri=E:/spring_microservice/git-localconfig-repo
I used forward slashes (/) everywhere in the path.
You can simply add the following line to your configuration file, application.properties or application.yml:
- For people who use an application.properties file:
spring.cloud.config.server.git.default-label=main
- For people who use an application.yml file:
spring:
  cloud:
    config:
      server:
        git:
          default-label: main

Pushing mocks to remote wiremock server fails with "JSON Parsing" Error

I am trying to post mappings to a remote server from a Spring application. What I found while debugging is that my JSON gets converted to "StubMapping", and this is where the code fails with the following error:
Error creating bean with name 'wiremockConfig' defined in file [C:\Users\Addy\school-impl-api\target\classes\com\test\school\project\wiremock\WiremockConfig.class]: Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.test.order.implementation.product.wiremock.WiremockConfig$$EnhancerBySpringCGLIB$$b100848d]: Constructor threw exception; nested exception is com.github.tomakehurst.wiremock.common.JsonException: {
"errors" : [ {
"code" : 10,
"source" : {
"pointer" : "/mappings"
},
"title" : "Error parsing JSON",
"detail" : "Unrecognized field \"mappings\" (class com.github.tomakehurst.wiremock.stubbing.StubMapping), not marked as ignorable"
} ]
}
I got details for posting to a remote standalone server from the following issue (last comment).
https://github.com/tomakehurst/wiremock/issues/1138
My code for posting to the remote server is like this:
WireMock wm = new WireMock("https", "wiremock-poc.apps.pcf.sample.int", 443);
wm.loadMappingsFrom("src/main/resources"); // Root dir contains mappings and __files
This gets loaded when I run with the local profile.
Please provide guidance on how to solve this and move forward.
Regards
Update: Sample mapping file.
{
  "mappings": [
    {
      "request": {
        "method": "GET",
        "urlPathPattern": "/school/admin/rest/users/([0-9]*)?([a-zA-Z0-9_\\-\\=\\?\\.]*)"
      },
      "response": {
        "status": 200,
        "headers": {
          "Content-Type": "application/json"
        },
        "bodyFileName": "./mockResponses/School-getUser.json"
      }
    }
  ]
}
After a discussion in chat, it turned out that each mapping should be kept in its own file; the top-level "mappings" wrapper is what triggers the "Unrecognized field" error.
Here's the source code that is responsible for that: RemoteMappingsLoader#load
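If splitting the files is not convenient, a sketch of an alternative (assuming the same host and port as above, and that the response file already exists under __files on the remote server) is to register each stub programmatically with the remote client instead of loading it from disk:

import com.github.tomakehurst.wiremock.client.WireMock;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class RegisterStubs {
    public static void main(String[] args) {
        // Remote admin client, same as in the question.
        WireMock wm = new WireMock("https", "wiremock-poc.apps.pcf.sample.int", 443);
        // Register one stub directly; no top-level "mappings" wrapper is involved.
        wm.register(get(urlPathMatching("/school/admin/rest/users/([0-9]*)"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBodyFile("mockResponses/School-getUser.json")));
    }
}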

How to ignore Avro schema by different Confluent Registry source?

I produce messages with the same Avro schema to one topic using different Confluent Registry sources. I get this error when I consume the topic:
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition XXXXX_XXXX_XXX-0 at offset 0. If needed, please seek past the record to continue consumption.
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 7
Caused by: org.apache.kafka.common.errors.SerializationException: Could not find class XXXXX_XXXX_XXX specified in writer's schema whilst finding reader's schema for a SpecificRecord.
How can I ignore the differing Avro message IDs?
Schema:
{
  "type": "record",
  "name": "XXXXX_XXXX_XXX",
  "namespace": "aa.bb.cc.dd",
  "fields": [
    {
      "name": "ACTION",
      "type": [
        "null",
        "string"
      ],
      "default": null,
      "doc": "action"
    },
    {
      "name": "EMAIL",
      "type": [
        "null",
        "string"
      ],
      "default": null,
      "doc": "email address"
    }
  ]
}
Produced message:
{"Action": "A", "EMAIL": "xxxx#xxx.com"}
It's not possible to use different Registry URLs in producers and still be able to consume the messages consistently.
The reason is that a different ID will be placed in each record in the topic.
The schema ID lookup cannot be skipped.
If you had used the same registry, the same schema payload would always generate the same ID, which the consumer would then be able to use consistently to read messages.
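A minimal sketch of what that means for the client configuration (the broker and registry addresses are placeholders): producer and consumer must both point at the same schema.registry.url so the ID embedded in each record can be resolved on the consumer side.

import java.util.Properties;

public class AvroClientConfig {
    // Producer side: registers the schema and embeds its ID in every record.
    public static Properties producerProps() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "broker:9092");
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        p.put("schema.registry.url", "http://schema-registry:8081");
        return p;
    }

    // Consumer side: must use the SAME registry so the embedded ID resolves to the same schema.
    public static Properties consumerProps() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "broker:9092");
        p.put("group.id", "example-group");
        p.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        p.put("value.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        p.put("schema.registry.url", "http://schema-registry:8081");
        p.put("specific.avro.reader", "true"); // read into generated SpecificRecord classes
        return p;
    }
}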

Failed to restore snapshot - IndexShardRestoreFailedException file not found in elasticsearch

I am using a 3-node cluster with Elasticsearch 1.3.1. I have 17 indices, each holding between 0.5 M documents (1 GiB) and 1.4 M documents (3 GiB). Now I would like to try the snapshot and restore process in my cluster. I used the following REST calls to do this...
To create a repository:
curl -XPUT 'http://host.name:9200/_snapshot/es_snapshot_repo' -d '{
  "type": "fs",
  "settings": {
    "location": "/data/es_snapshot_bkup_repo/es_snapshot_repo"
  }
}'
Verified the repository:
curl -XGET 'http://host.name:9200/_snapshot/es_snapshot_repo?pretty'
The response is:
{
  "es_snapshot_repo" : {
    "type" : "fs",
    "settings" : {
      "location" : "/data/es_snapshot_bkup_repo/es_snapshot_repo"
    }
  }
}
Took the snapshot using:
curl -XPUT "http://host.name:9200/_snapshot/es_snapshot_repo/snap_001" -d '{
"indices": "index_01",
"ignore_unavailable": "true",
"include_global_state": false,
"wait_for_completion": true
}'
The response is:
{
"accepted": true
}
Then I try to restore the snapshot with the request:
curl -XPOST "http://host.name:9200/_snapshot/es_snapshot_repo/snap_001/_restore" -d '{
  "indices": "index_01",
  "ignore_unavailable": "true",
  "include_global_state": false,
  "rename_pattern": "index_01",
  "rename_replacement": "index_01_bk",
  "include_aliases": false
}'
ISSUE:
As mentioned, I have 3 nodes. The index which I am trying to snapshot and restore has 6 shards and 2 replicas.
Most of the shards and their replicas are restored properly, but sometimes 1, sometimes 2 primary shards and their replicas are not restored. Those primary shards stay in the INITIALIZING state. I allowed the cluster to relocate them for more than an hour, but the shards are not relocating to the correct node... I got the following exception on my node.
The restore process tries to place the shard on the other 2 nodes... but it is not possible...
[2014-08-27 07:10:35,492][DEBUG][cluster.service ] [node_01] processing [
shard-failed (
[snap_001][4],
node[r4UoA7vJREmQfh6lz634NA],
[P],
restoring[es_snapshot_repo:snap_001],
s[INITIALIZING]),
reason [Failed to start shard,
message [IndexShardGatewayRecoveryException[[snap_001][4] failed recovery];
nested: IndexShardRestoreFailedException[[snap_001][4] restore failed];
nested: IndexShardRestoreFailedException[[snap_001][4] failed to restore snapshot [snap_001]];
nested: IndexShardRestoreFailedException[[snap_001][4] failed to read shard snapshot file];
nested: FileNotFoundException[/data/es_snapshot_bkup_repo/es_snapshot_repo/indices/index_01/4/snapshot-snap_001 (No such file or directory)]; ]]]:
done applying updated cluster_state (version: 56391)
Could anyone help me overcome this issue, and please correct me if I have made any mistake in this process...
FYI, I am using the master node to issue the curl requests.
We need to provide a shared file system location which can be accessed by all the Elasticsearch nodes with read and write permission.
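A small sketch to verify that (using the repository path from the question): run it on every node; if any node prints false for exists or writable, the location is not a truly shared mount on that node, which matches the FileNotFoundException seen above.

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CheckRepoPath {
    public static void main(String[] args) {
        // The same path that was registered as the "fs" snapshot repository.
        Path repo = Paths.get("/data/es_snapshot_bkup_repo/es_snapshot_repo");
        System.out.println("exists=" + Files.exists(repo)
                + " readable=" + Files.isReadable(repo)
                + " writable=" + Files.isWritable(repo));
    }
}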
