I am using a 3-node cluster with Elasticsearch 1.3.1. I have 17 indices, each holding between 0.5 M documents (about 1 GiB) and 1.4 M documents (about 3 GiB). Now I would like to try the snapshot and restore process in my cluster. I used the following REST calls to do so...
To create a repository:
curl -XPUT 'http://host.name:9200/_snapshot/es_snapshot_repo' -d '{
  "type": "fs",
  "settings": {
    "location": "/data/es_snapshot_bkup_repo/es_snapshot_repo"
  }
}'
Verified the repository:
curl -XGET 'http://host.name:9200/_snapshot/es_snapshot_repo?pretty'
The response is:
{
  "es_snapshot_repo" : {
    "type" : "fs",
    "settings" : {
      "location" : "/data/es_snapshot_bkup_repo/es_snapshot_repo"
    }
  }
}
I took the snapshot using:
curl -XPUT "http://host.name:9200/_snapshot/es_snapshot_repo/snap_001" -d '{
"indices": "index_01",
"ignore_unavailable": "true",
"include_global_state": false,
"wait_for_completion": true
}'
The response is:
{
  "accepted": true
}
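Note: wait_for_completion is a URL query parameter rather than a body field, which would explain the immediate "accepted" response above. A sketch of the blocking form, plus a status check before restoring:

curl -XPUT "http://host.name:9200/_snapshot/es_snapshot_repo/snap_001?wait_for_completion=true" -d '{
  "indices": "index_01",
  "ignore_unavailable": "true",
  "include_global_state": false
}'
curl -XGET "http://host.name:9200/_snapshot/es_snapshot_repo/snap_001?pretty"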
Then I am trying to restore the snapshot with the following request:
curl -XPOST "http://host.name:9200/_snapshot/es_snapshot_repo/snap_001/_restore" -d '{
"indices": "index_01",
"ignore_unavailable": "true",
"include_global_state": false,
"rename_pattern": "index_01",
"rename_replacement": "index_01_bk",
"include_aliases": false
}'
ISSUE:
As mentioned, I have 3 nodes. The index I am trying to snapshot and restore has 6 shards and 2 replicas.
Most of the shards and their replicas are restored properly, but sometimes one and sometimes two primary shards (and their replicas) are not restored. Those primary shards remain in the INITIALIZING state. I have allowed the cluster more than an hour to allocate them, but the shards never relocate to the correct node. The restore process keeps trying to place the shard on the other two nodes, but it cannot. I see the following exception on my node:
[2014-08-27 07:10:35,492][DEBUG][cluster.service ] [node_01] processing [
shard-failed (
[snap_001][4],
node[r4UoA7vJREmQfh6lz634NA],
[P],
restoring[es_snapshot_repo:snap_001],
s[INITIALIZING]),
reason [Failed to start shard,
message [IndexShardGatewayRecoveryException[[snap_001][4] failed recovery];
nested: IndexShardRestoreFailedException[[snap_001][4] restore failed];
nested: IndexShardRestoreFailedException[[snap_001][4] failed to restore snapshot [snap_001]];
nested: IndexShardRestoreFailedException[[snap_001][4] failed to read shard snapshot file];
nested: FileNotFoundException[/data/es_snapshot_bkup_repo/es_snapshot_repo/indices/index_01/4/snapshot-snap_001 (No such file or directory)]; ]]]:
done applying updated cluster_state (version: 56391)
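To see where the stuck copies are and what recovery is doing, the _cat endpoints can be queried while the shards are INITIALIZING (a sketch; both endpoints exist in the 1.x line):

curl -XGET 'http://host.name:9200/_cat/shards?v'
curl -XGET 'http://host.name:9200/_cat/recovery?v'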
Could anyone help me overcome this issue, and please correct me if I have made any mistakes in this process?
FYI: I am sending the curl requests to the master node.
We need to provide a shared file system location that all the Elasticsearch nodes can access with read and write permission.
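For example, the repository location can be an NFS export mounted at the same path on every node (a sketch; the server name and export path are assumptions):

# Run on each Elasticsearch node; "nfs-host" and the export path are placeholders.
mount -t nfs nfs-host:/exports/es_snapshots /data/es_snapshot_bkup_repo
# Confirm the user running Elasticsearch can read and write there:
sudo -u elasticsearch touch /data/es_snapshot_bkup_repo/es_snapshot_repo/write_test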
Given a normal user ('simpleROUser', which has only the 'read' role on the database), an error is thrown when attempting to list collections.
The error message is:
Exception in thread "main" com.mongodb.MongoCommandException: Command failed with error 13 (Unauthorized): 'not authorized on wmMonitoring to execute command { listCollections: 1, cursor: {}, $db: "wmMonitoring", ...' on server xxxxxxx:27001. The full response is {"operationTime": {"$timestamp": {"t": 1614169303, "i": 1}}, "ok": 0.0, "errmsg": "not authorized on wmMonitoring to execute command { listCollections: 1, cursor: {}, $db: \"wmMonitoring\", ...
However, after changing only the user credentials to one with the 'root' role, it works (it lists all the collections under the database 'wmMonitoring').
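For reference, the listing code is essentially the following (a sketch using the sync driver; the connection string details are placeholders):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;

public class ListCollections {
    public static void main(String[] args) {
        // Placeholder credentials and host; authSource assumed to be "admin".
        try (MongoClient client = MongoClients.create(
                "mongodb://simpleROUser:<password>@xxxxxxx:27001/?authSource=admin")) {
            MongoDatabase db = client.getDatabase("wmMonitoring");
            // This is the call that fails with error 13 for simpleROUser.
            for (String name : db.listCollectionNames()) {
                System.out.println(name);
            }
        }
    }
}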
I've checked the 'simpleROUser' privileges, and 'listCollections' is there.
rs-dev-00:PRIMARY> grants = db.getUser( "simpleROUser", { showCredentials: true, showPrivileges: true, showAuthenticationRestrictions: true } )
rs-dev-00:PRIMARY> grants.user
simpleROUser
rs-dev-00:PRIMARY> grants.inheritedPrivileges
[
  {
    "resource" : {
      "db" : "wmISMonitoring",
      "collection" : ""
    },
    "actions" : [
      ...
      "listCollections",
      ...
    ]
  },
  ...
]
rs-dev-00:PRIMARY>
So... what am I missing?
More info:
MongoDB server: Percona distribution, v4.4.1-3
Mongo Java Driver: v4.2.1
Found the issue.
In Brazilian Portuguese, this is also referred to as "dedo gordo" ("fat finger").
There was a typo in the grant: the role was created on 'wmISMonitoring', while the actual database name is 'wmMonitoring'.
Once the database name in the grant command was fixed, everything worked.
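For anyone hitting the same thing, the fix amounts to re-pointing the role at the correct database, along these lines (a sketch from the mongo shell, run against the database where the user is defined):

rs-dev-00:PRIMARY> db.revokeRolesFromUser("simpleROUser", [ { role: "read", db: "wmISMonitoring" } ])
rs-dev-00:PRIMARY> db.grantRolesToUser("simpleROUser", [ { role: "read", db: "wmMonitoring" } ])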
Hi all, I'm working on Azure Functions and I'm new to this. I have created a local Java Azure Functions project using the archetype below:
mvn archetype:generate -DgroupId=com.mynew.serverlesstest -DartifactId=serverlessexample -DarchetypeGroupId=com.microsoft.azure -DarchetypeArtifactId=azure-functions-archetype -DinteractiveMode=false
The template has a simple Java function. Upon executing a "mvn clean package" command, function.json gets generated in the target folder for the function. Below is my function.json:
{
"scriptFile" : "..\\serverlessexample-1.0-SNAPSHOT.jar",
"entryPoint" : "com.mynew.serverlesstest.Function.hello",
"bindings" : [ {
"type" : "httpTrigger",
"name" : "req",
"direction" : "in",
"authLevel" : "anonymous",
"methods" : [ "get", "post" ]
}, {
"type" : "http",
"name" : "$return",
"direction" : "out"
} ],
"disabled" : false
}
On doing a mvn azure-functions:run, the application starts successfully, and I get the following in the command prompt:
[06-04-2020 07:26:55] Initializing function HTTP routes
[06-04-2020 07:26:55] Mapped function route 'api/hello' [get,post] to 'hello'
[06-04-2020 07:26:55] Mapped function route 'api/HttpTrigger-Java' [get,post] to 'HttpTrigger-Java'
[06-04-2020 07:26:55]
[06-04-2020 07:26:55] Host initialized (424ms)
[06-04-2020 07:26:55] Host started (433ms)
[06-04-2020 07:26:55] Job host started
Http Functions:
hello: [GET,POST] http://localhost:7071/api/hello
HttpTrigger-Java: [GET,POST] http://localhost:7071/api/HttpTrigger-Java
Hosting environment: Production
Content root path: C:\Users\ramaswamys\Development\azure-serverless\serverlessexample\target\azure-functions\serverlessexample-20200403205054646
Now listening on: http://0.0.0.0:7071
Application started. Press Ctrl+C to shut down.
[06-04-2020 07:27:00] Host lock lease acquired by instance ID '000000000000000000000000852CF5C4'.
But when I try hitting the API (http://localhost:7071/api/hello) from Postman, I don't get any response. I see the following in the command prompt:
[06-04-2020 07:29:04] Executing HTTP request: {
[06-04-2020 07:29:04] "requestId": "af46115f-7a12-49a9-87e0-7fb073a66450",
[06-04-2020 07:29:04] "method": "GET",
[06-04-2020 07:29:04] "uri": "/api/hello"
[06-04-2020 07:29:04] }
[06-04-2020 07:29:05] Executing 'Functions.hello' (Reason='This function was programmatically called via the host APIs.', Id=7c712cdf-332f-413f-bda2-138f9b89025b)
After this nothing happens. After 30 minutes, I get a timeout exception like the one below in the command prompt:
Microsoft.Azure.WebJobs.Host: Timeout value of 00:30:00 was exceeded by function: Functions.hello.
Can someone suggest what might be causing this and why no response is seen in Postman? Am I doing anything wrong here? Am I missing any configuration? Timely help would be appreciated.
Please check your function code.
From the info you offer, the trigger has already fired, which means the code inside your function is not completing. In other words, you successfully triggered the function, but execution gets stuck in its internal logic. Please check your code.
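For comparison, a handler that completes immediately looks like this (a sketch modeled on the archetype's template; the package, class, and function names are taken from the function.json above):

package com.mynew.serverlesstest;

import java.util.Optional;
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.HttpMethod;
import com.microsoft.azure.functions.HttpRequestMessage;
import com.microsoft.azure.functions.HttpResponseMessage;
import com.microsoft.azure.functions.HttpStatus;
import com.microsoft.azure.functions.annotation.AuthorizationLevel;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.HttpTrigger;

public class Function {
    // Returns immediately, so the host can complete the invocation instead of timing out.
    @FunctionName("hello")
    public HttpResponseMessage hello(
            @HttpTrigger(name = "req",
                         methods = {HttpMethod.GET, HttpMethod.POST},
                         authLevel = AuthorizationLevel.ANONYMOUS)
            HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {
        context.getLogger().info("Java HTTP trigger processed a request.");
        return request.createResponseBuilder(HttpStatus.OK).body("Hello from hello").build();
    }
}

If a handler like this still hangs, the blockage is usually in extra logic added inside the method (for example, a call that never returns), which matches the timeout you see.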
I am trying to create an ingest pipeline using the PUT request below:
{
  "description": "ContentExtractor",
  "processors": [
    {
      "extractor": {
        "field": "contentData",
        "target_field": "content"
      }
    }
  ]
}
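The body above is sent to the pipeline endpoint along these lines (a sketch; the host and pipeline id are placeholders):

curl -XPUT 'http://host:9200/_ingest/pipeline/content_extractor' -H 'Content-Type: application/json' -d @pipeline.json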
But this results in the following error:
{
  "error": {
    "root_cause": [
      {
        "type": "not_x_content_exception",
        "reason": "Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"
      }
    ],
    "type": "not_x_content_exception",
    "reason": "Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"
  },
  "status": 500
}
I see the following exception in the ES logs:
org.elasticsearch.common.compress.NotXContentException: Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes
at org.elasticsearch.common.compress.CompressorFactory.compressor(CompressorFactory.java:57) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.common.xcontent.XContentHelper.convertToMap(XContentHelper.java:65) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.ingest.PipelineStore.validatePipeline(PipelineStore.java:154) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.ingest.PipelineStore.put(PipelineStore.java:133) ~[elasticsearch-5.1.2.jar:5.1.2]
This problem happens when Elasticsearch is running on Solaris; the same request works fine on Linux. What am I doing wrong? Can somebody help me fix this issue?
Thanks in advance.
I got the exact same error message, but on a different version of Elasticsearch, when sending the request in an erroneous data format. I had misinterpreted the docs (https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html): the "request body" shown there is expected to be plain JSON; it is not describing a form-encoded HTTP request body. I was also using the old syntax in the URL path (just after the index name in the URL):

curl -XPUT -H "Content-Type: application/json" http://host:port/index/_mapping/_doc -d "mappings=#mymapping.json"

Just remove the "mappings=" prefix and the trailing path segment!
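In other words, send the file contents as the JSON body (a sketch; the @ tells curl to read the request body from the file):

curl -XPUT -H "Content-Type: application/json" http://host:port/index/_mapping -d @mymapping.json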
What should I do to make sure that my RabbitMQ user has permission to run C:\Windows\system32\cmd.exe?
In fact, I want to use the SSL protocol with RabbitMQ, but the node crashes. Here's the SSL log file:
=CRASH REPORT==== 4-May-2016::18:33:16 ===
crasher:
initial call: rabbit_mgmt_external_stats:init/1
pid: <0.233.0>
registered_name: rabbit_mgmt_external_stats
exception exit: {eacces,
[{erlang,open_port,
[{spawn,
"C:\\Windows\\system32\\cmd.exe /c handle.exe /accepteula -s -p 2052 2> nul"},
[stream,in,eof,hide]],
[]},
{os,cmd,1,[{file,"os.erl"},{line,204}]},
{rabbit_mgmt_external_stats,get_used_fd,1,[]},
{rabbit_mgmt_external_stats,'-infos/2-lc$^0/1-0-',2,
[]},
{rabbit_mgmt_external_stats,'-infos/2-lc$^0/1-0-',2,
[]},
{rabbit_mgmt_external_stats,emit_update,1,[]},
{rabbit_mgmt_external_stats,handle_info,2,[]},
{gen_server,handle_msg,5,
[{file,"gen_server.erl"},{line,599}]}]}
in function gen_server:terminate/6 (gen_server.erl, line 746)
ancestors: [rabbit_mgmt_agent_sup,<0.231.0>]
messages: []
links: [<0.232.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 4185
stack_size: 27
reductions: 77435063
neighbours:
Here's my rabbitmq.config file:
[
{ssl, [{versions, ['tlsv1.2']}]},
{
rabbit,
[
{ssl_listeners, [5676]},
{ssl_options, [{cacertfile,"D:/Profiles/user/AppData/Roaming/RabbitMQ/testca/cacert.pem"},
{certfile, "D:/Profiles/user/AppData/Roaming/RabbitMQ/server/cert.pem"},
{keyfile, "D:/Profiles/user/AppData/Roaming/RabbitMQ/server/key.pem"},
{versions, ['tlsv1.2']},
{verify,verify_peer},
{fail_if_no_peer_cert,false}
]},
{loopback_users, []}
]
}
].
eacces is an Erlang file error:
eacces:
Missing permission for reading the file, or for searching one of the parent directories.
Set the right permissions.
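For example, read-and-execute on cmd.exe can be granted to the account the RabbitMQ service runs as (a sketch from an elevated prompt; the account name is an assumption):

:: Grant the (assumed) service account permission to read and execute cmd.exe.
icacls C:\Windows\system32\cmd.exe /grant "rabbitmq-service-user":(RX)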
Alternatively, stop the RabbitMQ service, then try running rabbitmq-server.bat as administrator and check the logs.
We seem to be out of ideas on how to continue troubleshooting this issue. Suddenly we see the exception listed below being hit every few minutes.
This is a 4-shard MongoDB setup. The 4 mongos servers are proxied by 3 HAProxy instances. We aren't seeing any visible network issues between our application and the shards. The logs for the shards, config servers, and mongos don't show anything out of the ordinary, and the HAProxies seem to be doing their job just fine. Any thoughts and leads on this would be much appreciated!
{"#timestamp":"2016-02-10T05:10:23.780+00:00","#version":1,"message":"unable to process event [ Request Id: [ eb8c702c-b3e7-4605-99a7-c3dcb5a076a9 ] - Event: id [ 49e0ae8d-f16e-448c-9b28-a4af59aa2eb0 ] messageType [ bulk ] operationType: [ create ] xpoint [ 10254910941235296908401 ] ]","logger_name":"com.xyz.event.RabbitMQMessageProcessor","thread_name":"SimpleAsyncTaskExecutor-1","level":"WARN","level_value":30000,"stack_trace":"java.io.EOFException: null\n\tat org.bson.io.Bits.readFully(Bits.java:50) ~[mongo-java-driver-2.12.5.jar:na]\n\tat org.bson.io.Bits.readFully(Bits.java:35)\n\tat org.bson.io.Bits.readFully(Bits.java:30)\n\tat com.mongodb.Response.(Response.java:42)\n\tat com.mongodb.DBPort$1.execute(DBPort.java:141)\n\tat com.mongodb.DBPort$1.execute(DBPort.java:135)\n\tat com.mongodb.DBPort.doOperation(DBPort.java:164)\n\tat com.mongodb.DBPort.call(DBPort.java:135)\n\tat c.m.DBTCPConnector.innerCall(DBTCPConnector.java:289)\n\t... 56 common frames omitted\nWrapped by: c.m.MongoException$Network: Read operation to server prod_mongos.internal.xyz.com:27017 failed on database xyz\n\tat c.m.DBTCPConnector.innerCall(DBTCPConnector.java:297) ~[mongo-java-driver-2.12.5.jar:na]\n\tat c.m.DBTCPConnector.call(DBTCPConnector.java:268)\n\tat c.m.DBCollectionImpl.find(DBCollectionImpl.java:84)\n\tat c.m.DBCollectionImpl.find(DBCollectionImpl.java:66)\n\tat c.m.DBCollection.findOne(DBCollection.java:869)\n\tat c.m.DBCollection.findOne(DBCollection.java:843)\n\tat c.m.DBCollection.findOne(DBCollection.java:789)\n\tat o.s.d.m.c.MongoTemplate$FindOneCallback.doInCollection(MongoTemplate.java:2013) ~[spring-data-mongodb-1.6.3.RELEASE.jar:na]\n\tat o.s.d.m.c.MongoTemplate$FindOneCallback.doInCollection(MongoTemplate.java:1997)\n\tat o.s.d.m.c.MongoTemplate.executeFindOneInternal(MongoTemplate.java:1772)\n\t... 47 common frames omitted\nWrapped by: o.s.d.DataAccessResourceFailureException: Read operation to server prod_mongos.internal.xyz.com:27017 failed on database xyz; nested exception is com.mongodb.MongoException$Network: Read operation to server prod_mongos.internal.xyz.com:27017 failed on database xyz\n\tat o.s.d.m.c.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:59) ~[spring-data-mongodb-1.6.3.RELEASE.jar:na]\n\tat o.s.d.m.c.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:1946)\n\tat o.s.d.m.c.MongoTemplate.executeFindOneInternal(MongoTemplate.java:1776)\n\tat o.s.d.m.c.MongoTemplate.doFindOne(MongoTemplate.j...","HOSTNAME":"prod-node-09","requestId":"eb8c702c-b3e7-4605-99a7-c3dcb5a076a9","WHAT":"ProcessBulkDiscoveredXYZEvent","host":"172.30.31.155:44243","type":"cloud_service","tags":["_grokparsefailure"]}