I am trying to configure Kafka with JMX on Windows, but adding KAFKA_OPTS does not seem to work there. I am starting Kafka with the .bat script below. Any idea why I am not able to start Kafka with the JMX exporter?
SET KAFKA_HOME=D:\Pratyush\kafka\kafka_2.12-2.1.0
SET KAFKA_OPTS=-javaagent:%KAFKA_HOME%\KafkaMonitor\jmx_prometheus_javaagent-0.11.0.jar=7071:%KAFKA_HOME%\kafka-0-8-2.yml
cd %KAFKA_HOME%\bin\windows
kafka-server-start.bat %KAFKA_HOME%\config\server.properties
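For reference, once the agent does load, the Prometheus metrics endpoint should respond on the port given in KAFKA_OPTS (7071 above). A quick sanity check, assuming the broker started, is to open http://localhost:7071/metrics in a browser or run:
curl http://localhost:7071/metrics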
Thanks in advance..!!
I'm developing a Spring Boot app that uses IBM MQ, and I want all of that to be configured in Docker Compose. The problem is that the app uses custom queues that were originally created through the web UI in the browser. Example of the application.yml file:
...
ibm:
  mq:
    queues:
      first: QUEUE1
      second: QUEUE2
How do I create these queues on startup now that I want to run everything from the Docker Compose file? When I was running IBM MQ manually, I used a command like this:
docker run --env LICENSE=accept --env MQ_QMGR_NAME=QM1 --publish 1414:1414 --publish 9443:9443 --detach ibmcom/mq:latest
And now I'm almost doing the same but in the docker-compose.yml file:
...
ibm-mq:
  image: 'ibmcom/mq:latest'
  container_name: ibm-mq
  ports:
    - "1414:1414"
    - "9443:9443"
  environment:
    - LICENSE = accept
    - MQ_QMGR_NAME = QM1
Are there any environment variables to create custom queues, or how else do I do that? I didn't find any solution to this.
Based on this information: Customizing the queue manager configuration, you can create an MQSC file named 20-config.mqsc with some configuration options that will be run when your queue manager is created. Just put it into the /etc/mqm directory on the image.
So, create your 20-config.mqsc file like this:
DEFINE QLOCAL(QUEUE1) REPLACE
DEFINE QLOCAL(QUEUE2) REPLACE
And map it to your docker-compose.yml as a volume like this:
ibmmq:
  image: ibmcom/mq
  ports:
    - "1414:1414"
    - "9443:9443"
  environment:
    - LICENSE=accept
    - MQ_QMGR_NAME=QM1
  volumes:
    - <your 20-config.mqsc file path>:/etc/mqm/20-config.mqsc
It works for me
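A quick way to verify that the queues were actually created is to run runmqsc inside the running container; a sketch, assuming the container is named ibm-mq (as in the question's compose file) and the queue manager is QM1:
docker exec ibm-mq bash -c "echo 'DISPLAY QLOCAL(QUEUE1)' | runmqsc QM1"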
The chapter Customizing the queue manager configuration describes the options:
You can customize the configuration in several ways:
For getting started, you can use the default developer configuration, which is available out-of-the-box for the MQ Advanced for Developers image
By creating your own image and adding your own MQSC file into the /etc/mqm directory on the image. This file will be run when your queue manager is created.
By using remote MQ administration, via an MQ command server, the MQ HTTP APIs, or using a tool such as the MQ web console or MQ Explorer.
I was running through the tutorial here: http://kafka.apache.org/documentation.html#introduction
When I get to "Step 7: Use Kafka Connect to import/export data" and attempt to start two connectors I am getting the following errors:
ERROR Failed to flush WorkerSourceTask{id=local-file-source-0}, timed out while waiting for producer to flush outstanding messages, 1 left
ERROR Failed to commit offsets for WorkerSourceTask
Here is the portion of the tutorial:
Next, we'll start two connectors running in standalone mode, which means they run in a single, local, dedicated process. We provide three configuration files as parameters. The first is always the configuration for the Kafka Connect process, containing common configuration such as the Kafka brokers to connect to and the serialization format for data. The remaining configuration files each specify a connector to create. These files include a unique connector name, the connector class to instantiate, and any other configuration required by the connector.
bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
I have spent some time looking for a solution, but was unable to find anything useful. Any help is appreciated.
Thanks!
The reason I was getting this error was that the first server I created using config/server.properties was not running. I am assuming that because it is the leader for the topic, the messages could not be flushed and the offsets could not be committed.
Once I started the Kafka server using those server properties (config/server.properties), the issue was resolved.
You need to start Kafka server and Zookeeper before running Kafka Connect.
You need to run the commands from "Step 2: Start the server" first:
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
From here: https://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3CCAK0BMEpgWmL93wgm2jVCKbUT5rAZiawzOroTFc_A6Q=GaXQgfQ#mail.gmail.com%3E
You need to start ZooKeeper and the Kafka server(s) before running that line.
start zookeeper
bin/zookeeper-server-start.sh config/zookeeper.properties
start multiple kafka servers
bin/kafka-server-start.sh config/server.properties
bin/kafka-server-start.sh config/server-1.properties
bin/kafka-server-start.sh config/server-2.properties
start connectors
bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
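(This assumes test.txt already exists in the Kafka directory with the sample lines from the quickstart; if it does not, create it first, for example:)
echo -e "foo\nbar" > test.txt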
Then you will see some lines written into test.sink.txt:
foo
bar
And you can start the consumer to check it:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning
{"schema":{"type":"string","optional":false},"payload":"foo"}
{"schema":{"type":"string","optional":false},"payload":"bar"}
If you configure your Kafka broker with a hostname such as my.sandbox.com, make sure that you modify config/connect-standalone.properties accordingly:
bootstrap.servers=my.sandbox.com:9092
On Hortonworks HDP the default port is 6667, hence the setting is
bootstrap.servers=my.sandbox.com:6667
If Kerberos is enabled you will need the following settings as well (without SSL):
security.protocol=PLAINTEXTSASL
producer.security.protocol=PLAINTEXTSASL
producer.sasl.kerberos.service.name=kafka
consumer.security.protocol=PLAINTEXTSASL
consumer.sasl.kerberos.service.name=kafka
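With Kerberos, the Connect process also needs a JAAS login configuration; a minimal sketch, assuming a standard client JAAS file already exists at /etc/kafka/conf/kafka_client_jaas.conf and a valid ticket is in the cache (kinit first):
# point the JVM at the client JAAS config (the path is an assumption) before starting Connect
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/conf/kafka_client_jaas.conf"
bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties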
The Solr wiki (https://cwiki.apache.org/confluence/display/solr/Using+JMX+with+Solr) has information on how to run Solr with JMX enabled, but it does not work on my Windows dev box.
I'm using Solr 5.3.1, and even if I start the techproducts example (with bin/solr -e techproducts -Dcom.sun.management.jmxremote), the Solr instance does not show up in JConsole.
I'm able to connect to SolrMeter and even to an Elasticsearch node started locally, but not to the local Solr instance. The only local processes listed by JConsole are SolrMeter and Elasticsearch.
Any idea of what is missing?
In the Solr PDF reference guide 6.1 page 547 (CTRL-F jconsole) I found the solution:
ENABLE_REMOTE_JMX_OPTS=true
RMI_PORT=18983
In JConsole, choose "Remote Process" and enter localhost:18983, then click "Insecure connection".
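Alternatively, assuming Solr has been restarted with those settings, JConsole can be pointed at the RMI port directly from the command line:
jconsole localhost:18983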
Very easy: vi solr-5.1.0/bin/solr.in.sh and add the following content to the script:
SOLR_HOST="192.168.1.188"
ENABLE_REMOTE_JMX_OPTS="true"
RMI_PORT=18983
Restart Solr, and you can connect to it with JConsole from Windows. Enjoy!
I am trying to configure a jboss-5.1.0.GA server for my application on Linux. When I start the server I can see in the log that it starts successfully, but when I try to access the port it starts on (8080 by default), there is no response from the server, as if the server had not started. Can anyone please help me sort this out?
These are just reasonable guesses, but try the following:
Start your server on the command line with the following option:
-b 0.0.0.0
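For JBoss 5.1 that would look something like the following (run.sh is the standard start script under JBOSS_HOME/bin; on Windows use run.bat):
./run.sh -b 0.0.0.0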
Try the following URLs to check whether your server is running:
http://127.0.0.1:8080/
http://localhost:8080/
http://(your ip):8080/
good luck
I've deployed some managed beans on WebSphere 6.1 and I've managed to invoke them through a standalone client, but when I try to use the "jconsole" application distributed with the standard JDK, I can't make it work.
Has anyone achieved to connect the jconsole with WAS 6.1?
IBM WebSphere 6.1 is supposed to support JSR 160, the Java Management Extensions (JMX) Remote API. Furthermore, it uses the MX4J implementation (http://mx4j.sourceforge.net). But I can't make it work with either "jconsole" or "MC4J".
I have the classpath and JAVA_HOME correctly set, so the issue is not there.
WebSphere's support for JMX is crap, particularly if you need to connect to any secured JMX beans. Here's an interesting tidbit: their own implementation of JConsole will not connect to their own JVM. I have had a PMR open with IBM for over a year to fix this issue, and have gotten nothing but the runaround. They clearly don't want to fix this issue.
The only way I have been able to invoke remote secured JMX beans hosted on WebSphere has been to implement a client using the "WebSphere application client". This is basically a stripped down app server used for stuff like this.
Open a PMR with IBM. Perhaps if more people report this issue, they will actually fix it.
Update: You can run your application as a WebSphere Application Client in RAD. Open the run menu, then choose "Run...". In the dialog that opens, towards the bottom on the left-hand side, you will see "WebSphere v6.1 Application Client". I'm not sure how to start an Application Client outside of RAD.
IT WORKS !
http://issues.apache.org/jira/browse/GERONIMO-4534;jsessionid=FB20DD5973F01DD2D470FB9A1B45D209?page=com.atlassian.jira.plugin.system.issuetabpanels%3Aall-tabpanel
1) Change the config.xml and start the server.
-see here how to change config.xml: http://publib.boulder.ibm.com/wasce/V2.1.0/en/working-with-jconsole.html
2) Start jconsole with: jconsole -J-Djavax.net.ssl.keyStore=%GERONIMO_HOME%\var\security\keystores\geronimo-default -J-Djavax.net.ssl.keyStorePassword=secret -J-Djavax.net.ssl.trustStore=%GERONIMO_HOME%\var\security\keystores\geronimo-default -J-Djavax.net.ssl.trustStorePassword=secret -J-Djava.class.path=%JAVA_HOME%\lib\jconsole.jar;%JAVA_HOME%\lib\tools.jar;%GERONIMO_HOME%\repository\org\apache\geronimo\framework\geronimo-kernel\2.1.4\geronimo-kernel-2.1.4.jar
[or your version of geronimo-kernel jar]
3) In the jconsole interface, under Advanced, input:
JMX URL: service:jmx:rmi:///jndi/rmi://localhost:1099/JMXSecureConnector
user name: system
password: manager
4) click the connect button.
If you want the WebSphere MBeans, this one works for me:
The key is to configure the classpath and the security properly.
in one line:
jconsole -J-Dwas.install.root=C:/was61 -J-Djava.ext.dirs=C:/was61/plugins;C:/was61/plugins/com.ibm.ws.security.crypto_6.1.0;C:/was61/lib;C:/was61/java/jre/lib/ext -J-Dcom.ibm.SSL.ConfigURL="file:../../properties/ssl.client.props" -J-Dcom.ibm.CORBA.ConfigURL="file:../../properties/sas.client.props" service:jmx:iiop://host:port/jndi/JMXConnector
where port = the bootstrap port, e.g. 2809
Be careful when setting the sas and the ssl props.
Robert
I have successfully connected to ActiveMQ and ServiceMix using JConsole. Does WAS 6.1 use Java Management Extensions (JMX) technology? JMX is required for JConsole.
If your path is set correctly it should work fine. On Windows, go to System Properties -> Advanced tab -> Environment Variables. Set your JAVA_HOME system variable to the path of your JDK or JRE, and make sure %JAVA_HOME%/bin is added somewhere in your Path variable. Then all you need to do is go to Start -> Run -> JConsole, select the correct process name, and you're done.
Where are you having problems? I hope this helps.
Edit:
Here are the Java docs on JConsole.
Hmm... I know that WebSphere is kind of hard to configure. That's part of the reason we used ServiceMix for our ESB. Maybe it's not enabled by default in WebSphere and you would have to turn it on in the config somewhere.
WebSphere 6.1 does not support JConsole for some reason, even though it fully implements the JMX specs. This seems to be a weak area at the moment. Your best bet is to look at the Admin client to implement your own console.
You all seem to be incorrect. I am running WebSphere 6.1.041, using JDK 1.5, and I just started up JConsole and used the "simple connect" tab to connect to localhost with port=0, without a username and password, and it works fine.