RabbitMQ user permissions in Windows - java

What should I do to make sure that my RabbitMQ user has permission to run C:\Windows\system32\cmd.exe?
In fact, I want to use the SSL protocol with RabbitMQ, but the node crashes. Here's the SSL log file:
=CRASH REPORT==== 4-May-2016::18:33:16 ===
crasher:
initial call: rabbit_mgmt_external_stats:init/1
pid: <0.233.0>
registered_name: rabbit_mgmt_external_stats
exception exit: {eacces,
[{erlang,open_port,
[{spawn,
"C:\\Windows\\system32\\cmd.exe /c handle.exe /accepteula -s -p 2052 2> nul"},
[stream,in,eof,hide]],
[]},
{os,cmd,1,[{file,"os.erl"},{line,204}]},
{rabbit_mgmt_external_stats,get_used_fd,1,[]},
{rabbit_mgmt_external_stats,'-infos/2-lc$^0/1-0-',2,
[]},
{rabbit_mgmt_external_stats,'-infos/2-lc$^0/1-0-',2,
[]},
{rabbit_mgmt_external_stats,emit_update,1,[]},
{rabbit_mgmt_external_stats,handle_info,2,[]},
{gen_server,handle_msg,5,
[{file,"gen_server.erl"},{line,599}]}]}
in function gen_server:terminate/6 (gen_server.erl, line 746)
ancestors: [rabbit_mgmt_agent_sup,<0.231.0>]
messages: []
links: [<0.232.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 4185
stack_size: 27
reductions: 77435063
neighbours:
Here's my rabbitmq.config file:
[
{ssl, [{versions, ['tlsv1.2']}]},
{
rabbit,
[
{ssl_listeners, [5676]},
{ssl_options, [{cacertfile,"D:/Profiles/user/AppData/Roaming/RabbitMQ/testca/cacert.pem"},
{certfile, "D:/Profiles/user/AppData/Roaming/RabbitMQ/server/cert.pem"},
{keyfile, "D:/Profiles/user/AppData/Roaming/RabbitMQ/server/key.pem"},
{versions, ['tlsv1.2']},
{verify,verify_peer},
{fail_if_no_peer_cert,false}
]},
{loopback_users, []}
]
}
].

eacces is an Erlang file error:
eacces: Missing permission for reading the file, or for searching one of the parent directories.
Set the right permissions. Stop the RabbitMQ service, run rabbitmq-server.bat as administrator, and then check the logs again.
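Once the node starts cleanly, a Java client can connect to the TLS listener on port 5676 defined in the config above. A minimal sketch with the RabbitMQ Java client follows; the trust-store path and password are placeholders (the trust store would contain the testca certificate), not values from the original setup:
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import java.io.FileInputStream;
import java.security.KeyStore;

public class TlsConnectExample {
    public static void main(String[] args) throws Exception {
        // Placeholder trust store holding the CA certificate (cacert.pem imported into PKCS12).
        KeyStore trustStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("D:/Profiles/user/rabbit-truststore.p12")) {
            trustStore.load(in, "changeit".toCharArray());
        }
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        SSLContext sslContext = SSLContext.getInstance("TLSv1.2");
        sslContext.init(null, tmf.getTrustManagers(), null);
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        factory.setPort(5676);                // matches ssl_listeners in rabbitmq.config
        factory.useSslProtocol(sslContext);   // TLSv1.2, matching the versions option
        try (Connection connection = factory.newConnection()) {
            System.out.println("Connected over TLS: " + connection.isOpen());
        }
    }
}
Since fail_if_no_peer_cert is false, the client does not need to present its own certificate; if you later require a peer certificate, also load a key store and pass a KeyManagerFactory to sslContext.init.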

Related

Why can't a user with the 'read' role on the database list the collections?

Given a normal user ('simpleROUser', with only the 'read' role on the database), an error is thrown when attempting to list the collections.
The error message is:
Exception in thread "main" com.mongodb.MongoCommandException: Command failed with error 13 (Unauthorized): 'not authorized on wmMonitoring to execute command { listCollections: 1, cursor: {}, $db: "wmMonitoring", ...' on server xxxxxxx:27001. The full response is {"operationTime": {"$timestamp": {"t": 1614169303, "i": 1}}, "ok": 0.0, "errmsg": "not authorized on wmMonitoring to execute command { listCollections: 1, cursor: {}, $db: \"wmMonitoring\", ...
However, changing only the credentials to a user with the 'root' role, it works (it lists all the collections under the database 'wmMonitoring').
I've checked the 'simpleROUser' privileges, and 'listCollections' is there.
rs-dev-00:PRIMARY> grants = db.getUser( "simpleROUser", { showCredentials: true, showPrivileges: true, showAuthenticationRestrictions: true } )
rs-dev-00:PRIMARY> grants.user
simpleROUser
rs-dev-00:PRIMARY> grants.inheritedPrivileges
[
{
"resource" : {
"db" : "wmISMonitoring",
"collection" : ""
},
"actions" : [
...
"listCollections",
...
]
},
...
]
rs-dev-00:PRIMARY>
So... what am I missing?
More info:
MongoDB server: Percona distribution, v4.4.1-3
Mongo Java Driver: v4.2.1
Found the issue.
In Brazilian Portuguese, this is also referred to as "dedo gordo" (a "fat finger" mistake).
There was a typo in the grant: the privilege was defined on wmISMonitoring, while the actual database name is wmMonitoring.
Once the database name in the grant command was fixed, everything worked.
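For reference, once the grant points at the right database, listing the collections with the read-only user from the Java driver looks roughly like this (a minimal sketch; the host, port and password are placeholders, and authSource must match the database where simpleROUser was created):
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;

public class ListCollectionsExample {
    public static void main(String[] args) {
        // Placeholder connection string; adjust host, port, password and authSource to your deployment.
        String uri = "mongodb://simpleROUser:<password>@xxxxxxx:27001/?authSource=wmMonitoring";
        try (MongoClient client = MongoClients.create(uri)) {
            MongoDatabase db = client.getDatabase("wmMonitoring");
            // Requires the listCollections action on wmMonitoring, granted via the 'read' role.
            for (String name : db.listCollectionNames()) {
                System.out.println(name);
            }
        }
    }
}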

Java Azure Function Not Executing Locally

Hi all, I'm working on Azure Functions and I'm new to this. I have created a local Java Azure Functions project using the archetype below:
mvn archetype:generate -DgroupId=com.mynew.serverlesstest -DartifactId=serverlessexample -DarchetypeGroupId=com.microsoft.azure -DarchetypeArtifactId=azure-functions-archetype -DinteractiveMode=false
The template has a simple Java function. On running "mvn clean package", function.json gets generated in the target folder for the function. Below is my function.json:
{
"scriptFile" : "..\\serverlessexample-1.0-SNAPSHOT.jar",
"entryPoint" : "com.mynew.serverlesstest.Function.hello",
"bindings" : [ {
"type" : "httpTrigger",
"name" : "req",
"direction" : "in",
"authLevel" : "anonymous",
"methods" : [ "get", "post" ]
}, {
"type" : "http",
"name" : "$return",
"direction" : "out"
} ],
"disabled" : false
}
On running mvn azure-functions:run, the application starts successfully and I get the following in the command prompt:
[06-04-2020 07:26:55] Initializing function HTTP routes
[06-04-2020 07:26:55] Mapped function route 'api/hello' [get,post] to 'hello'
[06-04-2020 07:26:55] Mapped function route 'api/HttpTrigger-Java' [get,post] to 'HttpTrigger-Java'
[06-04-2020 07:26:55]
[06-04-2020 07:26:55] Host initialized (424ms)
[06-04-2020 07:26:55] Host started (433ms)
[06-04-2020 07:26:55] Job host started
Http Functions:
hello: [GET,POST] Hosting environment: Production
http://localhost:7071/api/hello
Content root path: C:\Users\ramaswamys\Development\azure-serverless\serverlessexample\target\azure-functions\serverlessexample-20200403205054646
Now listening on: http://0.0.0.0:7071
Application started. Press Ctrl+C to shut down.
HttpTrigger-Java: [GET,POST] http://localhost:7071/api/HttpTrigger-Java
[06-04-2020 07:27:00] Host lock lease acquired by instance ID '000000000000000000000000852CF5C4'.
But when I hit the API (http://localhost:7071/api/hello) from Postman, I don't get any response. I see the following in the command prompt:
[06-04-2020 07:29:04] Executing HTTP request: {
[06-04-2020 07:29:04] "requestId": "af46115f-7a12-49a9-87e0-7fb073a66450",
[06-04-2020 07:29:04] "method": "GET",
[06-04-2020 07:29:04] "uri": "/api/hello"
[06-04-2020 07:29:04] }
[06-04-2020 07:29:05] Executing 'Functions.hello' (Reason='This function was programmatically called via the host APIs.', Id=7c712cdf-332f-413f-bda2-138f9b89025b)
After this nothing happens. After 30 minutes I get a timeout exception like the one below in the command prompt:
Microsoft.Azure.WebJobs.Host: Timeout value of 00:30:00 was exceeded by function: Functions.hello.
Can someone suggest what might be causing this and why no response appears in Postman? Am I doing anything wrong here? Am I missing any configuration? Timely help would be appreciated.
Please check your function code.
From the information you have provided, the trigger has already fired, which means the code inside the function is not completing. In other words, the function is triggered successfully, but execution gets stuck in its internal logic and never returns a response. Review the function body.
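For comparison, a handler that completes and returns a response right away looks roughly like this (a sketch using the standard azure-functions-java-library annotations; it is not the asker's actual code):
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.HttpMethod;
import com.microsoft.azure.functions.HttpRequestMessage;
import com.microsoft.azure.functions.HttpResponseMessage;
import com.microsoft.azure.functions.HttpStatus;
import com.microsoft.azure.functions.annotation.AuthorizationLevel;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.HttpTrigger;
import java.util.Optional;

public class Function {
    @FunctionName("hello")
    public HttpResponseMessage hello(
            @HttpTrigger(name = "req",
                         methods = {HttpMethod.GET, HttpMethod.POST},
                         authLevel = AuthorizationLevel.ANONYMOUS)
            HttpRequestMessage<Optional<String>> request,
            ExecutionContext context) {
        context.getLogger().info("hello triggered");
        // Returning here ends the invocation; any blocking call in this method
        // keeps the invocation open until the host's 30-minute timeout fires.
        return request.createResponseBuilder(HttpStatus.OK)
                      .body("Hello from Azure Functions")
                      .build();
    }
}
If your function body already looks like this, also check for blocking calls (database lookups, HTTP calls to unreachable hosts) made before the response is built.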

Cloud Foundry: copy routes from one app to another

In Cloud Foundry, is it possible to copy missing routes from one app to another while doing a blue-green deployment?
I have an app with a few manually added routes. During the blue-green deployment (automated through the script below) I want to copy the missing/manually added routes to the new app. Is that possible?
Script:
#!/bin/bash
path="C:/Users/.../Desktop/cf_through_sh/appName.jar"
spaceName="development"
appBlue="appName"
appGreen="${appBlue}-dev"
manifestFile="C:/Users/.../Desktop/cf_through_sh/manifest-dev.yml"
domains=("domain1.com" "domain2.com")
appHosts=("host-v1" "host-v2")
evaluate_return_code (){
ret=$1
if [[ $ret != 0 ]]
then
exit $ret
fi
}
switch_to_target_space() {
space="development"
echo "Change space to ${space}"
cf t -s ${space}
evaluate_return_code $?
}
push_new_release() {
appGreen=$1
if [ ! -f "${manifestFile}" ]; then
echo "Missing manifest: ${manifestFile}";
exit 1;
fi
if [ ! -f "${path}" ]; then
echo "Missing artifact: ${path}";
exit 1;
fi
echo "Deploying ${path} as ${appGreen}"
cf push ${appGreen} -f ${manifestFile} -p ${path} --no-route
evaluate_return_code $?
}
map_routes() {
app=$1
domains=$2
shift
appHosts=$3
for host in ${appHosts[*]}; do
echo "Mapping ${host} to ${app}"
for domain in ${domains[*]}; do
cf map-route ${app} ${domain} -n ${host}
evaluate_return_code $?
done
done
}
unmap_routes() {
app=$1
domains=$2
shift
appHosts=$3
for host in ${appHosts[*]}; do
echo "Unmapping ${host} from ${app}"
for domain in ${domains[*]}; do
cf unmap-route ${app} ${domain} -n ${host}
evaluate_return_code $?
done
done
}
rename_app() {
oldName=$1
newName=$2
echo "Renaming ${oldName} to ${newName}"
cf rename ${oldName} ${newName}
evaluate_return_code $?
}
switch_names() {
appBlue=$1
appGreen=$2
appTemp="${appBlue}-old"
rename_app ${appBlue} ${appTemp}
rename_app ${appGreen} ${appBlue}
rename_app ${appTemp} ${appGreen}
}
stop_old_release() {
echo "Stopping old ${appGreen} app"
cf stop ${appGreen}
evaluate_return_code $?
}
switch_to_target_space ${spaceName}
push_new_release ${appGreen}
map_routes ${appGreen} ${domains[*]} ${appHosts[*]}
unmap_routes ${appBlue} ${domains[*]} ${appHosts[*]}
switch_names ${appBlue} ${appGreen}
stop_old_release
echo "DONE"
exit 0;
E.g.: appBlue has 5 routes:
1. host-v1.domain1.com
2. host-v2.domain1.com
3. host-v1.domain2.com
4. host-v2.domain2.com
5. manual-add.domain1.com //manually added route through admin UI
After the blue-green deployment through the script, the app contains only 4 routes:
1. host-v1.domain1.com
2. host-v2.domain1.com
3. host-v1.domain2.com
4. host-v2.domain2.com
How do I copy the missing 5th route? I don't want to pass the manual-add host from the script, since it was added manually.
In general, is it possible to copy routes from one app to another if they are not mapped?
This has to be done through Jenkins (or any CI/CD tool). What we did in our case: we had a CF-Manifest-Template.yml and a CF-Manifest-settings.json, plus a Gradle task that applies the settings from the JSON, fills in the manifest template, and generates a cf-manifest-generated.yml.
The Gradle build then has a task that does the blue-green deployment using this generated manifest file, with all the routes hard-coded in the manifest. This is the standard way of doing it.
But if you want to copy routes from an app running in Cloud Foundry to another app, then you need to write a REST client that connects to the Cloud Foundry Cloud Controller, gets all the routes of APP-A, and then creates those routes for APP-B.
It is pretty simple!
Write a REST client that executes the equivalent of this command:
cf app APP-A
This brings back the details of APP-A (the underlying Cloud Controller API returns them as JSON). The response includes these fields:
Showing health and status for app APP-A in org Org-A / space DEV as arun2381985@yahoo.com...
name: APP-A
requested state: started
instances: 1/1
usage: 1G x 1 instances
routes: ********
last uploaded: Sat 25 Aug 00:25:45 IST 2018
stack: cflinuxfs2
buildpack: java_buildpack
Read that response, collect the routes of APP-A, and then map them to APP-B. It's pretty simple.
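A rough Java sketch of such a client, using the CF v3 API to list the routes of APP-A and add APP-B as a destination (the API URL, OAuth token and app/route GUIDs are placeholders, JSON parsing is omitted, and the endpoint paths should be verified against your Cloud Controller version):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CopyRoutes {
    // Placeholders: take the API endpoint from `cf api`, the token from `cf oauth-token`,
    // and the app GUIDs from `cf app <name> --guid`.
    private static final String API = "https://api.example.com";
    private static final String TOKEN = "bearer <token>";

    public static void main(String[] args) throws Exception {
        String appAGuid = "<app-a-guid>";
        String appBGuid = "<app-b-guid>";
        HttpClient client = HttpClient.newHttpClient();

        // 1. List the routes currently mapped to APP-A.
        HttpRequest listRoutes = HttpRequest.newBuilder()
                .uri(URI.create(API + "/v3/apps/" + appAGuid + "/routes"))
                .header("Authorization", TOKEN)
                .GET()
                .build();
        String routesJson = client.send(listRoutes, HttpResponse.BodyHandlers.ofString()).body();
        System.out.println(routesJson); // parse the route GUIDs out of this JSON (Jackson/Gson)

        // 2. For each route GUID found above, add APP-B as a destination of that route.
        String routeGuid = "<route-guid-from-response>";
        String body = "{\"destinations\":[{\"app\":{\"guid\":\"" + appBGuid + "\"}}]}";
        HttpRequest addDestination = HttpRequest.newBuilder()
                .uri(URI.create(API + "/v3/routes/" + routeGuid + "/destinations"))
                .header("Authorization", TOKEN)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        System.out.println(client.send(addDestination, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}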

Error after installing mod_xmlrpc in ejabberd

I am working on a Java project in which I have to communicate with an ejabberd XMPP server (create/delete a Jabber user, etc.). From the different suggestions available on the internet I understood that XML-RPC is one method to achieve that.
I tried to install mod_xmlrpc as per the suggestions in this thread: Error while starting ejabberd with xml_rpc.
But ejabberd fails to start after configuring mod_xmlrpc. The ejabberd log says:
=ERROR REPORT==== 2015-03-15 20:23:27 ===
C(<0.42.0>:gen_mod:75) : Problem starting the module mod_adhoc for host "example.com"
options: []
exit: {noproc,
{gen_server,call,
[ejabberd_iq_sup,
{start_child,["example.com",mod_adhoc,process_local_iq]},
infinity]}}
=ERROR REPORT==== 2015-03-15 20:23:27 ===
C(<0.42.0>:gen_mod:80) : ejabberd initialization was aborted because a module start failed.
and the Erlang log says:
=CRASH REPORT==== 15-Mar-2015::20:23:27 ===
crasher:
initial call: supervisor:ejabberd_listener/1
pid: <0.270.0>
registered_name: []
exception exit: {undef,
[{ejabberd_xmlrpc,socket_type,[],[]},
{ejabberd_listener,'-bind_tcp_ports/0-fun-0-',1,
[{file,"ejabberd_listener.erl"},{line,63}]},
{lists,foreach,2,[{file,"lists.erl"},{line,1323}]},
{ejabberd_listener,init,1,
[{file,"ejabberd_listener.erl"},{line,52}]},
{supervisor,init,1,
[{file,"supervisor.erl"},{line,239}]},
{gen_server,init_it,6,
[{file,"gen_server.erl"},{line,304}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,239}]}]}
in function gen_server:init_it/6 (gen_server.erl, line 328)
ancestors: [ejabberd_sup,<0.42.0>]
messages: []
links: [#Port<0.3747>,<0.234.0>,#Port<0.3744>]
dictionary: []
trap_exit: true
status: running
heap_size: 987
stack_size: 27
reductions: 1215
neighbours:
=SUPERVISOR REPORT==== 15-Mar-2015::20:23:27 ===
Supervisor: {local,ejabberd_sup}
Context: start_error
Reason: {undef,
[{ejabberd_xmlrpc,socket_type,[],[]},
{ejabberd_listener,'-bind_tcp_ports/0-fun-0-',1,
[{file,"ejabberd_listener.erl"},{line,63}]},
{lists,foreach,2,[{file,"lists.erl"},{line,1323}]},
{ejabberd_listener,init,1,
[{file,"ejabberd_listener.erl"},{line,52}]},
{supervisor,init,1,[{file,"supervisor.erl"},{line,239}]},
{gen_server,init_it,6,
[{file,"gen_server.erl"},{line,304}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,239}]}]}
Offender: [{pid,undefined},
{name,ejabberd_listener},
{mfargs,{ejabberd_listener,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
Since I am a newbie to the ejabberd and Erlang world, I am finding it hard to track down the root cause. Please help me identify the root cause of this crash.
By the way, is there any other method for communicating with ejabberd (create/delete a Jabber user, etc.) from Java services?
The Erlang log says that it can't find the function socket_type/0 in the module ejabberd_xmlrpc. As this is a valid call, and both the module and the function exist in the ejabberd application, something went wrong during installation (compilation failure, wrong path?).
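Once ejabberd_xmlrpc compiles and loads correctly, calling it from Java might look roughly like this (a sketch with the Apache XML-RPC client; port 4560 and the single-struct parameter convention follow common ejabberd_xmlrpc examples, so check them against your ejabberd version and listener config):
import java.net.URL;
import java.util.HashMap;
import java.util.Map;
import org.apache.xmlrpc.client.XmlRpcClient;
import org.apache.xmlrpc.client.XmlRpcClientConfigImpl;

public class EjabberdXmlRpcExample {
    public static void main(String[] args) throws Exception {
        XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
        // Placeholder host and port; point this at your ejabberd_xmlrpc listener.
        config.setServerURL(new URL("http://example.com:4560"));
        XmlRpcClient client = new XmlRpcClient();
        client.setConfig(config);
        // The "register" admin command creates an account; arguments go in one struct.
        Map<String, Object> account = new HashMap<>();
        account.put("user", "newuser");
        account.put("host", "example.com");
        account.put("password", "secret");
        Object result = client.execute("register", new Object[]{account});
        System.out.println(result);
    }
}
Later ejabberd releases also expose the same admin commands over HTTP (mod_http_api), which is often easier to call from Java than XML-RPC.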

Failed to restore snapshot - IndexShardRestoreFailedException file not found in elasticsearch

I am using a 3-node cluster with Elasticsearch 1.3.1. I have 17 indices, each holding between 0.5 M documents (1 GiB) and 1.4 M documents (3 GiB). Now I would like to try the snapshot and restore process in my cluster. I used the following REST calls to do so.
To create a repository:
curl -XPUT 'http://host.name:9200/_snapshot/es_snapshot_repo' -d '{
"type": "fs",
"settings": {
"location": "/data/es_snapshot_bkup_repo/es_snapshot_repo"
}
}'
Verified the repository:
curl -XGET 'http://host.name:9200/_snapshot/es_snapshot_repo?pretty'
The response is:
{
"es_snapshot_repo" : {
"type" : "fs",
"settings" : {
"location" : "/data/es_snapshot_bkup_repo/es_snapshot_repo"
}
}
}
Created the snapshot using:
curl -XPUT "http://host.name:9200/_snapshot/es_snapshot_repo/snap_001" -d '{
"indices": "index_01",
"ignore_unavailable": "true",
"include_global_state": false,
"wait_for_completion": true
}'
The response is:
{
"accepted": true
}
Then I tried to restore the snapshot with this request:
curl -XPOST "http://host.name:9200/_snapshot/es_snapshot_repo/snap_001/_restore" -d '{
"indices": "index_01",
"ignore_unavailable": "true",
"include_global_state": false,
"rename_pattern": "index_01",
"rename_replacement": "index_01_bk",
"include_aliases": false
}'
ISSUE:
As mentioned, I have 3 nodes. The index I am trying to snapshot and restore has 6 shards and 2 replicas.
Most of the shards and their replicas are restored properly, but sometimes 1 and sometimes 2 primary shards (and their replicas) are not restored; those primary shards stay in the INITIALIZING state. I let the cluster try to allocate them for more than an hour, but the shards do not relocate to the correct node. I get the following exception on my node.
The restore process keeps trying to place the shard on the other 2 nodes, but it cannot.
[2014-08-27 07:10:35,492][DEBUG][cluster.service ] [node_01] processing [
shard-failed (
[snap_001][4],
node[r4UoA7vJREmQfh6lz634NA],
[P],
restoring[es_snapshot_repo:snap_001],
s[INITIALIZING]),
reason [Failed to start shard,
message [IndexShardGatewayRecoveryException[[snap_001][4] failed recovery];
nested: IndexShardRestoreFailedException[[snap_001][4] restore failed];
nested: IndexShardRestoreFailedException[[snap_001][4] failed to restore snapshot [snap_001]];
nested: IndexShardRestoreFailedException[[snap_001][4] failed to read shard snapshot file];
nested: FileNotFoundException[/data/es_snapshot_bkup_repo/es_snapshot_repo/indices/index_01/4/snapshot-snap_001 (No such file or directory)]; ]]]:
done applying updated cluster_state (version: 56391)
Could anyone help me overcome this issue, and please correct me if I made any mistake in this process?
FYI, I am using the master node to issue the curl requests.
You need to provide a shared file-system location that all the Elasticsearch nodes can access with read and write permission. The repository path (/data/es_snapshot_bkup_repo/es_snapshot_repo) must point to the same shared mount (for example NFS) on every node; if it is just a local directory on each node, the restore fails with FileNotFoundException for any shard whose snapshot files were written by another node.
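For completeness, the same restore call issued from Java, mirroring the curl request in the question and using only the JDK's HttpURLConnection (host.name remains the question's placeholder):
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RestoreSnapshotExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://host.name:9200/_snapshot/es_snapshot_repo/snap_001/_restore");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        String body = "{"
                + "\"indices\": \"index_01\","
                + "\"ignore_unavailable\": \"true\","
                + "\"include_global_state\": false,"
                + "\"rename_pattern\": \"index_01\","
                + "\"rename_replacement\": \"index_01_bk\","
                + "\"include_aliases\": false"
                + "}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        // Succeeds only if every node can read the snapshot files from the shared repository path.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}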
