Grails: Connecting jconsole to local process on specified port - java

I am trying to connect jconsole to a specified port for a local process. I can connect to the local process using the PID but not using the remote option.
I am using ubuntu 14.04 and JDK 1.7
This is what I am doing to run my app.
grails \
-Dcom.sun.management.jmxremote=true \
-Dcom.sun.management.jmxremote.port=9999 \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Djava.rmi.server.hostname=xxx.xxx.xxx.xxx \
-Dserver.port=8090 \
run-app
hostname -i also gives me xxx.xxx.xxx.xxx

Grails 2.3 and later run in "forked mode" by default, where the JVM running run-app spawns a separate process to run the target application. Therefore, rather than passing the -D options to grails, you should configure them in BuildConfig.groovy. Find the grails.project.fork option and add jvmArgs:
grails.project.fork = [
    run: [...., jvmArgs: ['-Dcom.sun.management.jmxremote=true',
                          '-Dcom.sun.management.jmxremote.port=9999',
                          // etc.
                         ]]
]
Using the -D options on the command line as you are currently doing will set up the JMX connector in the grails process, not in your application.

Adding the below code to resources.groovy resolved the issue for me.
String serverURL = grailsApplication.config.grails.serverURL
URL url = new URL(serverURL)
System.setProperty("java.rmi.server.hostname", "${url.host}")

// RMI registry that the JMX connector registers itself with
rmiRegistry(org.springframework.remoting.rmi.RmiRegistryFactoryBean) {
    port = 9999
    alwaysCreate = true
}

// JMX connector server exposed over RMI
serverConnector(org.springframework.jmx.support.ConnectorServerFactoryBean) { bean ->
    bean.dependsOn = ['rmiRegistry']
    objectName = "connector:name=rmi"
    serviceUrl = "service:jmx:rmi://${url.host}/jndi/rmi://${url.host}:9999/jmxrmi"
    environment = ['java.rmi.server.hostname'                 : "${url.host}",
                   'jmx.remote.x.password.file'               : "${grailsApplication.parentContext.getResource('/WEB-INF/jmx/jmxremote.password').file.absolutePath}",
                   'jmx.remote.x.access.file'                 : "${grailsApplication.parentContext.getResource('/WEB-INF/jmx/jmxremote.access').file.absolutePath}",
                   'com.sun.management.jmxremote.authenticate': true,
                   'com.sun.management.jmxremote.local.only'  : false,
                   'com.sun.management.jmxremote'             : true]
}
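To sanity-check the connector outside of jconsole, here is a minimal JMX client sketch (the host and the monitorRole/secret credentials are placeholders; use whatever is in your jmxremote.password and jmxremote.access files):

import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxConnectionCheck {
    public static void main(String[] args) throws Exception {
        // Same service URL pattern as the serverConnector bean above; adjust host/port as needed.
        String host = "xxx.xxx.xxx.xxx";
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi://" + host + "/jndi/rmi://" + host + ":9999/jmxrmi");

        // Placeholder credentials; they must match an entry in jmxremote.password/jmxremote.access.
        Map<String, Object> env = new HashMap<>();
        env.put(JMXConnector.CREDENTIALS, new String[] {"monitorRole", "secret"});

        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            System.out.println("Connected, MBean count: " + mbsc.getMBeanCount());
        } finally {
            connector.close();
        }
    }
}

If this connects, jconsole's "Remote Process" option should work with the same service URL and credentials.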

Related

Testcontainers with a company proxy

Every start of the various Testcontainers containers throws com.github.dockerjava.api.exception.InternalServerErrorException: {"message":"Get https://quay.io/v1/_ping: dial tcp x.x.x.x: getsockopt: connection refused"}
This is no surprise (docker is behind a company proxy). How can I configure testcontainers to use a specific HTTP proxy?
Another approach could be disabling the "ping" command and using our company docker repo.
You can, by specifying environment variables when building an image or running a container. For example, below I'm building an Elasticsearch container and passing the proxy configuration:
GenericContainer container = new GenericContainer("docker.elastic.co/elasticsearch/elasticsearch:6.1.1")
        .withExposedPorts(9200)
        .withEnv("discovery.type", "single-node")
        .withEnv("HTTP_PROXY", "http://127.0.0.1:3001")
        .withEnv("HTTPS_PROXY", "http://127.0.0.1:3001")
        .waitingFor(Wait.forHttp("/_cat/health?v&pretty")
                .forStatusCode(200));
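A small variation on the snippet above (just a sketch, assuming the same Elasticsearch image and a recent Testcontainers version; the Wait import path differs slightly between versions): read the proxy from the host environment so the same test also runs outside the corporate network.

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;

public class ProxyAwareContainers {

    // Copies HTTP(S)_PROXY from the host environment into the container only when set,
    // so the same test works both behind the company proxy and without it.
    public static GenericContainer elasticsearch() {
        GenericContainer container = new GenericContainer("docker.elastic.co/elasticsearch/elasticsearch:6.1.1")
                .withExposedPorts(9200)
                .withEnv("discovery.type", "single-node")
                .waitingFor(Wait.forHttp("/_cat/health?v&pretty").forStatusCode(200));

        String httpProxy = System.getenv("HTTP_PROXY");
        if (httpProxy != null) {
            container.withEnv("HTTP_PROXY", httpProxy);
        }
        String httpsProxy = System.getenv("HTTPS_PROXY");
        if (httpsProxy != null) {
            container.withEnv("HTTPS_PROXY", httpsProxy);
        }
        return container;
    }
}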
Otherwise, you can set your proxy settings globally in Docker. For Windows with a docker machine, you have to connect to it and set the HTTP proxy in the boot2docker profile.
docker-machine ssh default
sudo -s
echo "export HTTP_PROXY=http://your.proxy" >> /var/lib/boot2docker/profile
echo "export HTTPS_PROXY=http://your.proxy" >> /var/lib/boot2docker/profile
On Linux, you can create a file ~/.docker/config.json like:
{
  "proxies": {
    "default": {
      "httpProxy": "http://127.0.0.1:3001",
      "noProxy": "*.test.example.com,.example2.com"
    }
  }
}

Apache druid No known server

I am trying to set up Apache Druid on a single machine following the quickstart guide here. When I start the historical server, it shows an io.druid.java.util.common.IOE: No known server exception on screen.
Command:
java `cat conf-quickstart/druid/historical/jvm.config | xargs` \
    -cp "conf-quickstart/druid/_common:conf-quickstart/druid/historical:lib/*" \
    io.druid.cli.Main server historical
Full stack trace:
2018-04-07T18:23:40,234 WARN [main] io.druid.java.util.common.RetryUtils - Failed on try 1, retrying in 1,246ms.
io.druid.java.util.common.IOE: No known server
        at io.druid.discovery.DruidLeaderClient.getCurrentKnownLeader(DruidLeaderClient.java:276) ~[druid-server-0.12.0.jar:0.12.0]
        at io.druid.discovery.DruidLeaderClient.makeRequest(DruidLeaderClient.java:128) ~[druid-server-0.12.0.jar:0.12.0]
        at io.druid.query.lookup.LookupReferencesManager.fetchLookupsForTier(LookupReferencesManager.java:569) ~[druid-server-0.12.0.jar:0.12.0]
        at io.druid.query.lookup.LookupReferencesManager.tryGetLookupListFromCoordinator(LookupReferencesManager.java:420) ~[druid-server-0.12.0.jar:0.12.0]
        at io.druid.query.lookup.LookupReferencesManager.lambda$getLookupListFromCoordinator$4(LookupReferencesManager.java:398) ~[druid-server-0.12.0.jar:0.12.0]
        at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:63) [java-util-0.12.0.jar:0.12.0]
        at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:81) [java-util-0.12.0.jar:0.12.0]
        at io.druid.query.lookup.LookupReferencesManager.getLookupListFromCoordinator(LookupReferencesManager.java:388) [druid-server-0.12.0.jar:0.12.0]
I have tried to set it up from scratch many times, following exactly the steps in the quick-start guide, but I am unable to resolve this error. How can I resolve it?
If you already tried to start druid, then delete the druid-X.Y.Z/log and druid-X.Y.Z/var folders.
Start zookeeper ./zookeeper-X.Y.Z/bin/zkServer.sh start
Recreate those folders you erased with druid-X.Y.Z/bin/init
Run each command in a new tab in this order
java `cat conf-quickstart/druid/coordinator/jvm.config | xargs` -cp "conf-quickstart/druid/_common:conf-quickstart/druid/coordinator:lib/*" io.druid.cli.Main server coordinator
java `cat conf-quickstart/druid/overlord/jvm.config | xargs` -cp "conf-quickstart/druid/_common:conf-quickstart/druid/overlord:lib/*" io.druid.cli.Main server overlord
java `cat conf-quickstart/druid/broker/jvm.config | xargs` -cp "conf-quickstart/druid/_common:conf-quickstart/druid/broker:lib/*" io.druid.cli.Main server broker
java `cat conf-quickstart/druid/historical/jvm.config | xargs` -cp "conf-quickstart/druid/_common:conf-quickstart/druid/historical:lib/*" io.druid.cli.Main server historical
java `cat conf-quickstart/druid/middleManager/jvm.config | xargs` -cp "conf-quickstart/druid/_common:conf-quickstart/druid/middleManager:lib/*" io.druid.cli.Main server middleManager
You should now have 1 tab open for each of those commands (so 5).
Insert the data: curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/wikiticker-index.json localhost:8090/druid/indexer/v1/task
You will then see {"task":"index_hadoop_wikiticker_2018-06-06T19:17:51.900Z"}
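If you prefer to submit the task from code instead of curl, here is a minimal sketch using only JDK classes (same endpoint and file as the curl command above):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Scanner;

public class SubmitDruidTask {
    public static void main(String[] args) throws Exception {
        // Read the ingestion spec shipped with the quickstart.
        byte[] taskJson = Files.readAllBytes(Paths.get("quickstart/wikiticker-index.json"));

        // POST it to the overlord, exactly like the curl command above.
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:8090/druid/indexer/v1/task").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(taskJson);
        }

        // On success the overlord answers with something like {"task":"index_hadoop_wikiticker_..."}.
        try (Scanner in = new Scanner(conn.getInputStream(), "UTF-8")) {
            System.out.println(conn.getResponseCode() + " " + in.useDelimiter("\\A").next());
        }
    }
}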

Exception: Java gateway process exited before sending the driver its port number while creating a Spark Session in Python

So, I am trying to create a Spark session in Python 2.7 using the following:
#Initialize SparkSession and SparkContext
from pyspark.sql import SparkSession
from pyspark import SparkContext

#Create a Spark Session
SpSession = SparkSession \
    .builder \
    .master("local[2]") \
    .appName("V2 Maestros") \
    .config("spark.executor.memory", "1g") \
    .config("spark.cores.max", "2") \
    .config("spark.sql.warehouse.dir", "file:///c:/temp/spark-warehouse") \
    .getOrCreate()

#Get the Spark Context from Spark Session
SpContext = SpSession.sparkContext
I get the following error pointing to the python\lib\pyspark.zip\pyspark\java_gateway.py path:
Exception: Java gateway process exited before sending the driver its port number
I tried to look into the java_gateway.py file, which has the following contents:
import atexit
import os
import sys
import select
import signal
import shlex
import socket
import platform
from subprocess import Popen, PIPE

if sys.version >= '3':
    xrange = range

from py4j.java_gateway import java_import, JavaGateway, GatewayClient
from py4j.java_collections import ListConverter

from pyspark.serializers import read_int


# patching ListConverter, or it will convert bytearray into Java ArrayList
def can_convert_list(self, obj):
    return isinstance(obj, (list, tuple, xrange))

ListConverter.can_convert = can_convert_list


def launch_gateway():
    if "PYSPARK_GATEWAY_PORT" in os.environ:
        gateway_port = int(os.environ["PYSPARK_GATEWAY_PORT"])
    else:
        SPARK_HOME = os.environ["SPARK_HOME"]
        # Launch the Py4j gateway using Spark's run command so that we pick up the
        # proper classpath and settings from spark-env.sh
        on_windows = platform.system() == "Windows"
        script = "./bin/spark-submit.cmd" if on_windows else "./bin/spark-submit"
        submit_args = os.environ.get("PYSPARK_SUBMIT_ARGS", "pyspark-shell")
        if os.environ.get("SPARK_TESTING"):
            submit_args = ' '.join([
                "--conf spark.ui.enabled=false",
                submit_args
            ])
        command = [os.path.join(SPARK_HOME, script)] + shlex.split(submit_args)

        # Start a socket that will be used by PythonGatewayServer to communicate its port to us
        callback_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        callback_socket.bind(('127.0.0.1', 0))
        callback_socket.listen(1)
        callback_host, callback_port = callback_socket.getsockname()
        env = dict(os.environ)
        env['_PYSPARK_DRIVER_CALLBACK_HOST'] = callback_host
        env['_PYSPARK_DRIVER_CALLBACK_PORT'] = str(callback_port)

        # Launch the Java gateway.
        # We open a pipe to stdin so that the Java gateway can die when the pipe is broken
        if not on_windows:
            # Don't send ctrl-c / SIGINT to the Java gateway:
            def preexec_func():
                signal.signal(signal.SIGINT, signal.SIG_IGN)
            proc = Popen(command, stdin=PIPE, preexec_fn=preexec_func, env=env)
        else:
            # preexec_fn not supported on Windows
            proc = Popen(command, stdin=PIPE, env=env)

        gateway_port = None
        # We use select() here in order to avoid blocking indefinitely if the subprocess dies
        # before connecting
        while gateway_port is None and proc.poll() is None:
            timeout = 1  # (seconds)
            readable, _, _ = select.select([callback_socket], [], [], timeout)
            if callback_socket in readable:
                gateway_connection = callback_socket.accept()[0]
                # Determine which ephemeral port the server started on:
                gateway_port = read_int(gateway_connection.makefile(mode="rb"))
                gateway_connection.close()
                callback_socket.close()
        if gateway_port is None:
            raise Exception("Java gateway process exited before sending the driver its port number")

        # In Windows, ensure the Java child processes do not linger after Python has exited.
        # In UNIX-based systems, the child process can kill itself on broken pipe (i.e. when
        # the parent process' stdin sends an EOF). In Windows, however, this is not possible
        # because java.lang.Process reads directly from the parent process' stdin, contending
        # with any opportunity to read an EOF from the parent. Note that this is only best
        # effort and will not take effect if the python process is violently terminated.
        if on_windows:
            # In Windows, the child process here is "spark-submit.cmd", not the JVM itself
            # (because the UNIX "exec" command is not available). This means we cannot simply
            # call proc.kill(), which kills only the "spark-submit.cmd" process but not the
            # JVMs. Instead, we use "taskkill" with the tree-kill option "/t" to terminate all
            # child processes in the tree (http://technet.microsoft.com/en-us/library/bb491009.aspx)
            def killChild():
                Popen(["cmd", "/c", "taskkill", "/f", "/t", "/pid", str(proc.pid)])
            atexit.register(killChild)

    # Connect to the gateway
    gateway = JavaGateway(GatewayClient(port=gateway_port), auto_convert=True)

    # Import the classes used by PySpark
    java_import(gateway.jvm, "org.apache.spark.SparkConf")
    java_import(gateway.jvm, "org.apache.spark.api.java.*")
    java_import(gateway.jvm, "org.apache.spark.api.python.*")
    java_import(gateway.jvm, "org.apache.spark.ml.python.*")
    java_import(gateway.jvm, "org.apache.spark.mllib.api.python.*")
    # TODO(davies): move into sql
    java_import(gateway.jvm, "org.apache.spark.sql.*")
    java_import(gateway.jvm, "org.apache.spark.sql.hive.*")
    java_import(gateway.jvm, "scala.Tuple2")

    return gateway
I am pretty new to Spark and Pyspark, hence unable to debug the issue here. I also tried to look at some other suggestions:
Spark + Python - Java gateway process exited before sending the driver its port number?
and
Pyspark: Exception: Java gateway process exited before sending the driver its port number
but unable to resolve this so far. Please help!
Here is what the Spark environment looks like:
# This script loads spark-env.sh if it exists, and ensures it is only loaded once.
# spark-env.sh is loaded from SPARK_CONF_DIR if set, or within the current directory's
# conf/ subdirectory.

# Figure out where Spark is installed
if [ -z "${SPARK_HOME}" ]; then
  export SPARK_HOME="$(cd "`dirname "$0"`"/..; pwd)"
fi

if [ -z "$SPARK_ENV_LOADED" ]; then
  export SPARK_ENV_LOADED=1

  # Returns the parent of the directory this script lives in.
  parent_dir="${SPARK_HOME}"

  user_conf_dir="${SPARK_CONF_DIR:-"$parent_dir"/conf}"

  if [ -f "${user_conf_dir}/spark-env.sh" ]; then
    # Promote all variable declarations to environment (exported) variables
    set -a
    . "${user_conf_dir}/spark-env.sh"
    set +a
  fi
fi

# Setting SPARK_SCALA_VERSION if not already set.
if [ -z "$SPARK_SCALA_VERSION" ]; then
  ASSEMBLY_DIR2="${SPARK_HOME}/assembly/target/scala-2.11"
  ASSEMBLY_DIR1="${SPARK_HOME}/assembly/target/scala-2.10"

  if [[ -d "$ASSEMBLY_DIR2" && -d "$ASSEMBLY_DIR1" ]]; then
    echo -e "Presence of build for both scala versions(SCALA 2.10 and SCALA 2.11) detected." 1>&2
    echo -e 'Either clean one of them or, export SPARK_SCALA_VERSION=2.11 in spark-env.sh.' 1>&2
    exit 1
  fi

  if [ -d "$ASSEMBLY_DIR2" ]; then
    export SPARK_SCALA_VERSION="2.11"
  else
    export SPARK_SCALA_VERSION="2.10"
  fi
fi
Here is how my Spark environment is set up in Python:
import os
import sys

# NOTE: Please change the folder paths to your current setup.
#Windows
if sys.platform.startswith('win'):
    #Where you downloaded the resource bundle
    os.chdir("E:/Udemy - Spark/SparkPythonDoBigDataAnalytics-Resources")
    #Where you installed spark.
    os.environ['SPARK_HOME'] = 'E:/Udemy - Spark/Apache Spark/spark-2.0.0-bin-hadoop2.7'
#other platforms - linux/mac
else:
    os.chdir("/Users/kponnambalam/Dropbox/V2Maestros/Modules/Apache Spark/Python")
    os.environ['SPARK_HOME'] = '/users/kponnambalam/products/spark-2.0.0-bin-hadoop2.7'

os.curdir

# Create a variable for our root path
SPARK_HOME = os.environ['SPARK_HOME']

#Add the following paths to the system path. Please check your installation
#to make sure that these zip files actually exist. The names might change
#as versions change.
sys.path.insert(0, os.path.join(SPARK_HOME, "python"))
sys.path.insert(0, os.path.join(SPARK_HOME, "python", "lib"))
sys.path.insert(0, os.path.join(SPARK_HOME, "python", "lib", "pyspark.zip"))
sys.path.insert(0, os.path.join(SPARK_HOME, "python", "lib", "py4j-0.10.1-src.zip"))

#Initialize SparkSession and SparkContext
from pyspark.sql import SparkSession
from pyspark import SparkContext
After reading many posts I finally made Spark work on my Windows laptop. I use Anaconda Python, but I am sure this will work with the standard distribution too.
So, you need to make sure you can run Spark independently. My assumption is that you have a valid Python path and Java installed. For Java I had "C:\ProgramData\Oracle\Java\javapath" defined in my Path, which redirects to my Java 8 bin folder.
Download a pre-built-for-Hadoop version of Spark from https://spark.apache.org/downloads.html and extract it, e.g. to C:\spark-2.2.0-bin-hadoop2.7
Create an environment variable SPARK_HOME, which you will need later for pyspark to pick up your local Spark installation.
Go to %SPARK_HOME%\bin and try to run pyspark, the Python Spark shell. If your environment is like mine you will see an exception about not being able to find winutils and hadoop. The second exception will be about missing Hive:
pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
I then found and simply followed https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-tips-and-tricks-running-spark-windows.html
Specifically:
Download winutils and put it in c:\hadoop\bin. Create a HADOOP_HOME environment variable and add %HADOOP_HOME%\bin to PATH.
Create a directory for Hive, e.g. c:\tmp\hive, and run winutils.exe chmod -R 777 C:\tmp\hive in a cmd window started in admin mode.
Then go to %SPARK_HOME%\bin and make sure that when you run pyspark you see the Spark ASCII-art logo.
Note that the sc spark context variable should already be defined.
Well, my main purpose was to have pyspark with auto-completion in my IDE, and that's where SPARK_HOME (step 2) comes into play. If everything is set up correctly, the pyspark imports and session creation should work from the IDE as well.
Hope that helps and you can enjoy running Spark code locally.
I have the same problem.
Luckily I found the reason.
from pyspark.sql import SparkSession
# spark = SparkSession.builder.appName('Check Pyspark').master("local").getOrCreate()
spark = SparkSession.builder.appName('CheckPyspark').master("local").getOrCreate()
print spark.sparkContext.parallelize(range(6), 3).collect()
Notice the difference between the second line and the third line.
If the parameter passed to appName contains a space, like 'Check Pyspark', you will get the error (Exception: Java gateway process...).
The parameter passed to appName cannot contain a blank space, so change 'Check Pyspark' to 'CheckPyspark'.
From my "guess" this is a problem with your java version. Maybe you have two different java version installed. Also it looks like you are using code that you copy and paste from somewhere for setting the SPARK_HOMEetc.. There are many simple examples how to set up Spark. Also it looks like that you are using Windows. I would suggest to take a *NIX environment to test things as this is much easier e.g. you could use brew to install Spark. Windows is not really made for this...
I had the exact same issue after playing around with my JAVA_HOME environment variable on Windows 10 using Python 2.7: I tried to run the same configuration script for Pyspark (based on the V2-Maestros Udemy course) and got the same error message, "Java gateway process exited before sending the driver its port number".
After several attempts to fix the problem, the only solution that ended up working was to uninstall all versions of Java (I had three of them) from my machine, delete the JAVA_HOME system variable as well as the JAVA_HOME-related entry from the PATH variable, perform a clean installation of Java JRE 1.8.0_141, reconfigure both the JAVA_HOME and PATH entries in the Windows system environment, and restart my machine. After that the script finally worked.
Hope this helps.
Spark does not work well with Java versions newer than 11; downgrade to Java 8 or 11 and set JAVA_HOME accordingly.

Docker port isn't accessible from host

I have a new Spring Boot application that I just finished and am trying to deploy it to Docker. Inside the container the application works fine. It uses ports 9000 for user facing requests and 9100 for administrative tasks like health checks. When I start a docker instance and try to access port 9000 I get the following error:
curl: (56) Recv failure: Connection reset by peer
After a lot of experimentation (via curl), I confirmed with several different configurations that the application functions fine inside the container, but when I try to map ports to the host it doesn't connect. I've tried starting it with the following commands. None of them allow me to access the ports from the host.
docker run -P=true my-app
docker run -p 9000:9000 my-app
The workaround
The only approach that works is using the --net host option, but this doesn't allow me to run more than one container on that host.
docker run -d --net=host my-app
Experiments with ports and expose
I've used various versions of the Dockerfile exposing different ports such as 9000 and 9100 or just 9000. None of that helped. Here's my latest version:
FROM ubuntu
MAINTAINER redacted
RUN apt-get update
RUN apt-get install openjdk-7-jre-headless -y
RUN mkdir -p /opt/app
WORKDIR /opt/app
ADD ./target/oauth-authentication-1.0.0.jar /opt/app/service.jar
ADD config.properties /opt/app/config.properties
EXPOSE 9000
ENTRYPOINT java -Dext.properties.dir=/opt/app -jar /opt/app/service.jar
Hello World works
To make sure I can run a Spring Boot application, I tried Simplest-Spring-Boot-MVC-HelloWorld and it worked fine.
Nmap results
I've used nmap to do port scans from the host and from the container:
From the host
root@my-docker-host:~# nmap 172.17.0.71 -p9000-9200
Starting Nmap 6.40 ( http://nmap.org ) at 2014-11-14 19:19 UTC
Nmap scan report for my-docker-host (172.17.0.71)
Host is up (0.0000090s latency).
Not shown: 200 closed ports
PORT STATE SERVICE
9100/tcp open jetdirect
MAC Address: F2:1A:ED:F4:07:7A (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 1.48 seconds
From the container
root@80cf20c0c1fa:/opt/app# nmap 127.0.0.1 -p9000-9200
Starting Nmap 6.40 ( http://nmap.org ) at 2014-11-14 19:20 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.0000070s latency).
Not shown: 199 closed ports
PORT STATE SERVICE
9000/tcp open cslistener
9100/tcp open jetdirect
Nmap done: 1 IP address (1 host up) scanned in 2.25 seconds
The container is using Ubuntu
The hosts I've replicated this on are CentOS and Ubuntu.
This SO question seems similar but had very few details and no answers, so I thought I'd try to document my scenario a bit more.
I had a similar problem, in which specifying a host IP address as '127.0.0.1' wouldn't properly forward the port to the host.
Setting the web server's IP to '0.0.0.0' fixes the problem.
e.g. for my Node app, the following doesn't work:
app.listen(3000, '127.0.0.1')
Whereas the following does work:
app.listen(3000, '0.0.0.0')
Which I guess means that docker, by default, exposes 0.0.0.0:containerPort -> local port.
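The same idea expressed in Java, just an illustrative sketch using the JDK's built-in HttpServer (not the asker's Spring Boot app): bind to 0.0.0.0 rather than 127.0.0.1 so the published port is reachable through Docker's bridge network. For a Spring Boot application the analogous setting is the server.address property (leaving it unset means all interfaces).

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class BindAllInterfaces {
    public static void main(String[] args) throws Exception {
        // Binding to 127.0.0.1 would only accept connections from inside the container;
        // 0.0.0.0 listens on all interfaces, including the one Docker forwards to.
        HttpServer server = HttpServer.create(new InetSocketAddress("0.0.0.0", 9000), 0);
        server.createContext("/", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}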
You should run with docker run -P to get the ports mapped automatically to the values set with EXPOSE in the Dockerfile. Please see http://docs.docker.com/reference/run/#expose-incoming-ports

Munin jmx configuration

I am trying to enable JMX monitoring on Munin
I have followed the guide at:
https://github.com/munin-monitoring/contrib/tree/master/plugins/java/jmx
It tells me:
1) Files from "plugin" folder must be copied to /usr/share/munin/plugins (or another - where your munin plugins located)
2) Make sure that jmx_ executable : chmod a+x /usr/share/munin/plugins/jmx_
3) Copy configuration files that you want to use, from "examples" folder, into /usr/share/munin/plugins folder
4) create links from the /etc/munin/plugins folder to the /usr/share/munin/plugins/jmx_
The name of the link must follow wildcard pattern:
jmx_<configname>,
where configname is the name of the configuration (config filename without extension), for example:
ln -s /usr/share/munin/plugins/jmx_ /etc/munin/plugins/jmx_process_memory
I have done exactly this, but when I run ./jmx_process_memory, I just get:
Error: Could not find or load main class org.munin.plugin.jmx.memory
The actual config file is called java_process_memory.conf, so I have also tried naming the symlink jmx_java_process_memory, but I get the same error.
I have had success by naming the symlink jmx_Threads as described here:
http://blog.johannes-beck.name/?p=160
I can see that org.munin.plugin.jmx.Threads is the name of a class within munin-jmx-plugins.jar, and the other classes seem to work also. But this is not what the Munin guide tells me to do, so is the documentation wrong? What is the purpose of the config files? They must be there for a reason. There are example config files for Tomcat, which is where my real interest lies, so I need to understand this. Without being able to get it working as per the guide, though, I'm a bit stuck!
Can anyone put me right on this?
Cheers
NFV
I was stuck with much the same issue.
Here is what I did to get something working a little better, though still not perfectly.
I'm on RHEL:
[root@bus|in plugins]# cat /etc/munin/plugin-conf.d/munin-node
[diskstats]
user munin
[iostat_ios]
user munin
[jmx_*]
env.ip 192.168.1.101
env.port 5054    <- the port configured for your JMX agent
then
[root@bus|in plugins]# ls -l /etc/munin/plugins/jmx_MultigraphAll
lrwxrwxrwx 1 root root 29 14 mars 15:36 /etc/munin/plugins/jmx_MultigraphAll -> /usr/share/munin/plugins/jmx_
and I modified the /usr/share/munin/plugins/jmx_ script with the following:
#!/bin/sh
# -*- sh -*-
: << =cut
=head1 NAME
jmx_ - Wildcard plugin to monitor Java application servers via JMX
=head1 APPLICABLE SYSTEMS
Tested with Tomcat 4.1/5.0/5.5/6.0 on Sun JVM 5/6 and OpenJDK.
Any JVM that supports JMX should in theory do.
Needs nc in path for autoconf.
=head1 CONFIGURATION
[jmx_*]
env.ip 127.0.0.1
env.port 5400
env.category jvm
env.username monitorRole
env.password SomethingSecret
env.JRE_HOME /usr/lib/jvm/java-6-sun/jre
env.JAVA_OPTS -Xmx128m
Needed configuration on the Tomcat side: add
-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=5400 \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false
to CATALINA_OPTS in your startup scripts.
Replace authenticate=false with
-Dcom.sun.management.jmxremote.password.file=/etc/tomcat/jmxremote.password \
-Dcom.sun.management.jmxremote.access.file=/etc/tomcat/jmxremote.access
...if you want authentication.
jmxremote.password:
monitorRole SomethingSecret
jmxremote.access:
monitorRole readonly
You may need higher access levels for some counters, notably ThreadsDeadlocked.
=head1 BUGS
No encryption supported in the JMX connection.
The plugins available reflect the most interesting aspects of a
JVM runtime. This should be extended to cover things specific to
Tomcat, JBoss, Glassfish and so on. Patches welcome.
=head1 AUTHORS
=encoding UTF-8
Mo Amini, Diyar Amin and Younes Hajji, Høgskolen i Oslo/Oslo
University College.
Shell script wrapper and integration by Erik Inge Bolsø, Redpill
Linpro AS.
Previous work on JMX plugin by Aleksey Studnev. Support for
authentication added by Ingvar Hagelund, Redpill Linpro AS.
=head1 LICENSE
GPLv2
=head1 MAGIC MARKERS
#%# family=auto
#%# capabilities=autoconf suggest
=cut
MUNIN_JAR="/usr/share/java/munin-jmx-plugins.jar"
if [ "x$JRE_HOME" != "x" ] ; then
JRE=$JRE_HOME/bin/java
export JRE_HOME=$JRE_HOME
fi
JAVA_BIN=${JRE:-/opt/jdk/jre/bin/java}
ip=${ip:-192.168.1.101}
port=${port:-5054}
if [ "x$1" = "xsuggest" ] ; then
echo MultigraphAll
exit 0
fi
if [ "x$1" = "xautoconf" ] ; then
NC=`which nc 2>/dev/null`
if [ "x$NC" = "x" ] ; then
echo "no (nc not found)"
exit 0
fi
$NC -n -z $ip $port >/dev/null 2>&1
CONNECT=$?
$JAVA_BIN -? >/dev/null 2>&1
JAVA=$?
if [ $JAVA -ne 0 ] ; then
echo "no (java runtime not found at $JAVA_BIN)"
exit 0
fi
if [ ! -e $MUNIN_JAR ] ; then
echo "no (munin jmx classes not found at $MUNIN_JAR)"
exit 0
fi
if [ $CONNECT -eq 0 ] ; then
echo "yes"
exit 0
else
echo "no (connection to $ip:$port failed)"
exit 0
fi
fi
if [ "x$1" = "xconfig" ] ; then
param=config
else
param=Tomcat
fi
scriptname=${0##*/}
jmxfunc=${scriptname##*_}
prefix=${scriptname%_*}
if [ "x$jmxfunc" = "x" ] ; then
echo "error, plugin must be symlinked in order to run"
exit 1
fi
ip=$ip port=$port $JAVA_BIN -cp $MUNIN_JAR $JAVA_OPTS org.munin.plugin.jmx.$jmxfunc $param $prefix
And you have to set the right permissions and owner:group on what you define as the JRE, for example:
[root@bus|in plugins]# ls -ld /opt/jdk
drwxrwxr-x 8 nobody nobody 4096 8 oct. 15:03 /opt/jdk
Now I can run it (and I can see it's using nobody:nobody as user:group, maybe something to play with in the conf):
[root@bus|in plugins]# munin-run jmx_MultigraphAll -d
# Processing plugin configuration from /etc/munin/plugin-conf.d/df
# Processing plugin configuration from /etc/munin/plugin-conf.d/fw_
# Processing plugin configuration from /etc/munin/plugin-conf.d/hddtemp_smartctl
# Processing plugin configuration from /etc/munin/plugin-conf.d/munin-node
# Processing plugin configuration from /etc/munin/plugin-conf.d/postfix
# Processing plugin configuration from /etc/munin/plugin-conf.d/sendmail
# Setting /rgid/ruid/ to /99/99/
# Setting /egid/euid/ to /99 99/99/
# Setting up environment
# Environment ip = 192.168.1.101
# Environment port = 5054
# About to run '/etc/munin/plugins/jmx_MultigraphAll'
multigraph jmx_memory
Max.value 2162032640
Committed.value 1584332800
Init.value 1613168640
Used.value 473134248
multigraph jmx_MemoryAllocatedHeap
Max.value 1037959168
Committed.value 1037959168
Init.value 1073741824
Used.value 275414584
multigraph jmx_MemoryAllocatedNonHeap
Max.value 1124073472
Committed.value 546373632
Init.value 539426816
Used.value 197986088
[...]
multigraph jmx_ProcessorsAvailable
ProcessorsAvailable.value 1
Now I'm trying to get it to work for different JVMs on the same host, because this is for a single one.
I hope that can help you.
edit:
I have since modified the setup to work with several Java processes, each with its own JMX port.
You have to add them there:
[root@bus|in plugins]# cat /etc/munin/plugin-conf.d/munin-node
[diskstats]
user munin
[iostat_ios]
user munin
[admin_jmx_*]
env.ip 192.168.1.101
env.port 5054
[managed_jmx_*]
env.ip 192.168.1.101
env.port 5055
[jboss_jmx_*]
env.ip 192.168.1.101
env.port 1616
and then create the links:
[root@bus|in plugins]# ls -l /etc/munin/plugins/*_jmx_*
lrwxrwxrwx 1 root root 29 14 mars 15:36 /etc/munin/plugins/admin_jmx_MultigraphAll -> /usr/share/munin/plugins/jmx_
lrwxrwxrwx 1 root root 29 14 mars 16:51 /etc/munin/plugins/jboss_jmx_MultigraphAll -> /usr/share/munin/plugins/jmx_
lrwxrwxrwx 1 root root 29 14 mars 16:03 /etc/munin/plugins/managed_jmx_MultigraphAll -> /usr/share/munin/plugins/jmx_
and I commented out the ip and port from the /usr/share/munin/plugins/jmx_ file, but I'm not sure it plays a role.
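For what it's worth, the org.munin.plugin.jmx.* classes that the symlink suffix selects (the $jmxfunc part in the last line of the script) are essentially small JMX clients. The following is only a rough illustration of the idea, not the plugin's actual code, using standard javax.management APIs and the ip/port from the plugin configuration above:

import java.lang.management.MemoryUsage;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxMemoryProbe {
    public static void main(String[] args) throws Exception {
        String ip = "192.168.1.101";  // env.ip from the plugin config
        String port = "5054";         // env.port from the plugin config
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + ip + ":" + port + "/jmxrmi");

        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Read the standard java.lang:type=Memory MBean, roughly what the jmx_memory graph shows.
            CompositeData heap = (CompositeData) mbsc.getAttribute(
                    new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
            MemoryUsage usage = MemoryUsage.from(heap);
            System.out.println("Used.value " + usage.getUsed());
            System.out.println("Max.value " + usage.getMax());
        } finally {
            connector.close();
        }
    }
}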
