I'm executing this code: https://github.com/IBM/blockchain-application-using-fabric-java-sdk. When I execute CreateChannel I get this error:
Send transactions failed. Reason: timeout
I checked the log of the orderer.example.com Docker container and there seems to be no communication at all. How can I solve this problem?
The channel create command times out when the orderer takes long enough (>5s) to respond to the transaction. You can add --timeout duration to increase the default value. I faced a similar issue while creating a channel through the command line - https://hyperledger-fabric.readthedocs.io/en/release-1.3/commands/peerchannel.html#peer-channel-create
You can check whether the Java SDK provides an equivalent configuration in the channel APIs for peers.
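In code, the equivalent knob in the Fabric Java SDK is an orderer property. A minimal sketch, assuming the SDK honours the "ordererWaitTimeMilliSecs" property (verify it against the SDK version bundled with the sample) and that client is the HFClient the sample already builds:

import java.util.Properties;
import org.hyperledger.fabric.sdk.Orderer;

// Assumption: "ordererWaitTimeMilliSecs" is read by the SDK's OrdererClient;
// "client" is an HFClient with crypto suite and user context already set.
Properties ordererProps = new Properties();
ordererProps.setProperty("ordererWaitTimeMilliSecs", "30000"); // allow 30s instead of the default
Orderer orderer = client.newOrderer("orderer.example.com", "grpc://localhost:7050", ordererProps);
// pass this Orderer to client.newChannel(...) when creating the channel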
I'm getting this Kafka exception on the consumer:
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'correlation_id': java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:71)
at org.apache.kafka.common.requests.ResponseHeader.parse(ResponseHeader.java:53)
at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:435)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:265)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorKnown(AbstractCoordinator.java:184)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:886)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853)
There is no client-server version mismatch.
Be sure your client connects to a real Kafka port!
This specific error happens while parsing (one of?) the first header fields of the expected Kafka message, as shown by the invocation of ResponseHeader.java in the stack trace.
So this can occur if you target a listening port that has nothing to do with a Kafka server.
It's just a one-minute check!
Otherwise, you should check for a client-server version mismatch.
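If you want to test that programmatically rather than with a console client, something like the sketch below works with a reasonably recent kafka-clients jar (AdminClient exists from 0.11 onward; the stack trace above is from an older consumer, so treat this purely as a diagnostic tool). The broker address is a placeholder:

import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class BrokerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // the port you suspect
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");
        try (AdminClient admin = AdminClient.create(props)) {
            // Fails fast if the port is unreachable or is not speaking the Kafka protocol.
            String clusterId = admin.describeCluster().clusterId().get(10, TimeUnit.SECONDS);
            System.out.println("Connected to Kafka cluster: " + clusterId);
        }
    }
}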
For me, the trouble was a unit test failing with the above exception. When I inspected the port (9092) being used on my local machine, it was already bound to a running process. It's worth checking whether a Kafka process is running locally; if you are sure you don't expect one to be running, find its PID and kill it.
(Don't try this in production, though :P )
lsof -i:9092
kill -9 <PID_FROM_ABOVE_IF_ANY>
MSMQ error while trying to access remote private queue.
Exception: Cannot open queue. (hr=unknown hr (-2147023071))
I have already added these two registry values:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSMQ\Parameters\Security\AllowNonauthenticatedRPC and set the value to 1.
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSMQ\Parameters\Security\NewRemoteReadServerAllowNoneSecurityClient and set it to 1
-2147023071 is 0x80070721, which isn't an MSMQ-specific error code (those start with 0xC00E). I believe this is a security-related error code.
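As a quick sanity check of that decimal-to-HRESULT mapping (the arithmetic is language-independent; Java shown here only for illustration):

// -2147023071 viewed as an unsigned 32-bit value is 0x80070721: a Win32
// facility-7 HRESULT rather than an MSMQ one (those would start 0xC00E).
// The low word 0x0721 is Win32 error 1825, a security-package (RPC) error,
// which fits the security-related reading above.
System.out.println(Integer.toHexString(-2147023071)); // prints 80070721
System.out.println(0x80070721 & 0xFFFF);              // prints 1825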
As you are receiving messages from a remote queue, you are using the RPC protocol so this article will help:
Understanding how MSMQ security blocks RPC traffic
Sending a message uses the MSMQ protocol and so does not have the same problems.
I was running through the tutorial here: http://kafka.apache.org/documentation.html#introduction
When I get to "Step 7: Use Kafka Connect to import/export data" and attempt to start the two connectors, I get the following errors:
ERROR Failed to flush WorkerSourceTask{id=local-file-source-0}, timed out while waiting for producer to flush outstanding messages, 1 left
ERROR Failed to commit offsets for WorkerSourceTask
Here is the portion of the tutorial:
Next, we'll start two connectors running in standalone mode, which means they run in a single, local, dedicated process. We provide three configuration files as parameters. The first is always the configuration for the Kafka Connect process, containing common configuration such as the Kafka brokers to connect to and the serialization format for data. The remaining configuration files each specify a connector to create. These files include a unique connector name, the connector class to instantiate, and any other configuration required by the connector.
bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
I have spent some time looking for a solution, but was unable to find anything useful. Any help is appreciated.
Thanks!
The reason I was getting this error was that the first server I created using config/server.properties was not running. I assume that because it is the leader of the topic, the messages could not be flushed and the offsets could not be committed.
Once I started the Kafka server using those server properties (config/server.properties), the issue was resolved.
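You can reproduce the same stall outside Connect with a bare Java producer pointed at a broker that is not running. This is only an illustrative sketch; the topic name and broker address are placeholders matching the tutorial's defaults:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class FlushDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // broker from config/server.properties
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("connect-test", "hello"));
            // With no broker listening this blocks and eventually times out,
            // the same stall Connect reports as "waiting for producer to flush".
            producer.flush();
        }
    }
}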
You need to start the Kafka server and ZooKeeper before running Kafka Connect.
Execute the commands from "Step 2: Start the server" first:
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
From here: https://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3CCAK0BMEpgWmL93wgm2jVCKbUT5rAZiawzOroTFc_A6Q=GaXQgfQ#mail.gmail.com%3E
You need to start ZooKeeper and the Kafka server(s) before running that line.
Start ZooKeeper:
bin/zookeeper-server-start.sh config/zookeeper.properties
Start the Kafka servers:
bin/kafka-server-start.sh config/server.properties
bin/kafka-server-start.sh config/server-1.properties
bin/kafka-server-start.sh config/server-2.properties
Start the connectors:
bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
Then you will see lines being written to test.sink.txt:
foo
bar
And you can start the consumer to check it:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning
{"schema":{"type":"string","optional":false},"payload":"foo"}
{"schema":{"type":"string","optional":false},"payload":"bar"}
If you configured your Kafka broker with a hostname such as my.sandbox.com, make sure that you modify config/connect-standalone.properties accordingly:
bootstrap.servers=my.sandbox.com:9092
On Hortonworks HDP the default port is 6667, hence the setting is
bootstrap.servers=my.sandbox.com:6667
If Kerberos is enabled you will need the following settings as well (without SSL):
security.protocol=PLAINTEXTSASL
producer.security.protocol=PLAINTEXTSASL
producer.sasl.kerberos.service.name=kafka
consumer.security.protocol=PLAINTEXTSASL
consumer.sasl.kerberos.service.name=kafka
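On the Java client side the same values go into the producer/consumer Properties. A sketch, assuming the HDP-specific PLAINTEXTSASL protocol name (vanilla Apache Kafka calls it SASL_PLAINTEXT) and that a JAAS login config with your principal/keytab is supplied separately:

import java.util.Properties;

public final class SecureClientProps {
    public static Properties forSandbox() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "my.sandbox.com:6667"); // HDP default broker port
        props.put("security.protocol", "PLAINTEXTSASL");       // SASL_PLAINTEXT on vanilla Kafka
        props.put("sasl.kerberos.service.name", "kafka");
        // A JAAS config (java.security.auth.login.config) naming the client
        // principal and keytab is still required on the JVM command line.
        return props;
    }
}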
I am trying to install a Kafka & ZooKeeper instance on a remote server. I actually only need one node of each, because I only want to provide a remote Kafka for test purposes.
Kafka and Zookeeper are running from the Apache Kafka tarball you can find there (v0.0.9), inside a Docker image.
I am trying to consume/produce using the provided scripts, and to produce using my own Java application. Everything works fine if Kafka & ZooKeeper are installed on the local server.
Here is the error I get while trying to produce:
BrokerPartitionInfo:83 - Error while fetching metadata [{TopicMetadata for topic RSS ->
No partition metadata for topic RSS due to kafka.common.LeaderNotAvailableException}] for topic [RSS]: class kafka.common.LeaderNotAvailableException
Kafka properties tested
First:
broker.id=0
port=9092
host.name=<external-ip>
zookeeper.connect=localhost:<PORT>
Second:
broker.id=0
port=9092
host.name=<external-ip>
zookeeper.connect=<external-ip>:<PORT>
Third:
broker.id=0
port=9092
host.name=<external-ip>
zookeeper.connect=<external-ip>:<PORT>
advertised.host.name=<external-ip>
advertised.host.port=<external-ip>
Last:
broker.id=0
port=9092
host.name=</etc/host name>
zookeeper.connect=<external-ip>:<PORT>
advertised.host.name=<external-ip>
advertised.host.port=<external-ip>
Here is my "/etc/hosts"
127.0.0.1 kafka kafka
127.0.0.1 localhost
I followed the Getting Started guide, which, if I understood correctly, is a localhost / single-server configuration. I cannot understand what I have to do to make this work with remote calls...
Thanks for your help!
EDIT 1
host.name=localhost
advertised.host.name=politik.cm-cloud.fr
This seems to allow a local consumer (on the server) and producer. But if we want to do the same from a remote server, we get:
[2015-12-09 12:44:10,826] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.NoRouteToHostException: No route to host
The error does not look like a connectivity problem with ZooKeeper / Kafka.
Just follow the instructions in the "quickstart" from http://kafka.apache.org/
BrokerPartitionInfo:83 - Error while fetching metadata [{TopicMetadata for topic RSS ->
Additionally, the error indicates there is no partition info, i.e. the topic has not been created yet. Try creating the topic first and then produce/consume. When producing to a non-existent topic, Kafka will create it based on auto.create.topics.enable in server.properties, but for a remote setup it is better to create topics explicitly rather than relying on auto-create.
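If your client is new enough, the topic can also be created from Java before producing. This is only a sketch: AdminClient arrived in 0.11 and the CreateTopics API needs brokers of at least 0.10.1, so the older setup in the question would have to use the kafka-topics.sh script instead. The address is a placeholder:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateRssTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "<external-ip>:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 1 partition, replication factor 1: matches a single-broker test setup
            admin.createTopics(Collections.singleton(new NewTopic("RSS", 1, (short) 1))).all().get();
        }
    }
}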
I have developed a subscribe (topic) concept using Camel. It works fine in my local Tomcat, but it does not work in my test environment Tomcat, where I get the error mentioned below. Kindly help me resolve the issue and explain how to debug it.
Is it related to server configuration?
Error
org.apache.camel.component.jms.JmsMessageListenerContainer refreshConnectionUntilSuccessful
SEVERE: Could not refresh JMS Connection for destination 'TOPIC-NAME' - retrying in 5000 ms. Cause: JMSWMQ0018: Failed to
connect to queue manager 'QUEUE-MANAGER' with connection mode 'Client' and
host name 'HOST-NAME'.; nested exception is com.ibm.mq.MQException:
JMSCMQ0001: WebSphere MQ call failed with compcode '2' ('MQCC_FAILED')
reason '2059' ('MQRC_Q_MGR_NOT_AVAILABLE').
Regards,
Gnana
There is almost no information to go on here and therefore no way to answer with any confidence. Instead, I'll provide a diagnostic process and hopefully you will find the problem. Note that in the future if you have similar issues, it would help to list the diagnostics you have already tried so that people responding can narrow down their answers.
In order for this to work, the QMgr must be running a listener, have a channel defined and available, have authorizations set up to allow the connection, and be able to resolve the queue or topic requested. With that in mind, the things I normally check and the order I check them in is as follows:
Is the QMgr running?
Is the listener running? On what port?
Can I telnet to the QMgr on the listener port? For example, telnet mqhost 1414.
Is the channel defined? If so, is it available?
Do the sample client programs work? In this case, amqspubc is the one to try.
There are other considerations and if all of the above work, it is time to look into the client code and configuration, the versions of the client and server, authorizations, etc. But until you know that the basic configuration is in place to support a client connection (which was not indicated in the question) then these are the things to start with.
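If the items above check out and you want to take Camel and Tomcat out of the picture, a bare JMS connection attempt from Java will reproduce (or rule out) the 2059 very quickly. This is only a sketch: host, port, channel and queue manager names are placeholders, and it assumes the IBM MQ classes for JMS jars are on the classpath:

import javax.jms.Connection;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class MqConnectTest {
    public static void main(String[] args) throws Exception {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setHostName("HOST-NAME");                     // the MQ server host
        cf.setPort(1414);                                // the listener port
        cf.setQueueManager("QUEUE-MANAGER");
        cf.setChannel("CHANNEL-NAME");                   // the SVRCONN channel defined on the QMgr
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT); // client mode, as in the Camel error above
        Connection conn = cf.createConnection();         // a 2059 here points at listener/QMgr/channel config
        conn.start();
        System.out.println("Connected OK");
        conn.close();
    }
}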