When I try to test Storm using the Yahoo streaming-benchmark I get the errors shown below. I tried changing the port to 2080 instead of the default port 2181 in the ZooKeeper "zoo.cfg" file and in Kafka's "server.properties", but I still get the same error. Any help would be much appreciated. Thanks in advance. :-)
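For reference, these are the entries I changed. clientPort is the standard zoo.cfg key; I am assuming zookeeper.connect is the relevant line in server.properties, and the storm.yaml line is only my guess at what the benchmark's Storm config would also need to match (not confirmed):

# ZooKeeper zoo.cfg
clientPort=2080

# Kafka server.properties
zookeeper.connect=localhost:2080

# storm.yaml (assumption, not confirmed)
storm.zookeeper.port: 2080

The error output: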
2792 [main] INFO o.a.s.s.o.a.z.s.NIOServerCnxnFactory - binding to port 0.0.0.0/0.0.0.0:2181
2796 [main] ERROR o.a.s.s.o.a.z.s.NIOServerCnxnFactory - Thread Thread[main,5,main] died
java.lang.RuntimeException: No port is available to launch an inprocess zookeeper.
at org.apache.storm.zookeeper$mk_inprocess_zookeeper$fn__2124$fn__2126.invoke(zookeeper.clj:223) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.zookeeper$mk_inprocess_zookeeper$fn__2124.invoke(zookeeper.clj:219) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.zookeeper$mk_inprocess_zookeeper.doInvoke(zookeeper.clj:217) ~[storm-core-1.0.1.jar:1.0.1]
at clojure.lang.RestFn.invoke(RestFn.java:439) ~[clojure-1.7.0.jar:?]
at org.apache.storm.command.dev_zookeeper$_main.doInvoke(dev_zookeeper.clj:25) ~[storm-core-1.0.1.jar:1.0.1]
at clojure.lang.RestFn.invoke(RestFn.java:397) ~[clojure-1.7.0.jar:?]
at clojure.lang.AFn.applyToHelper(AFn.java:152) ~[clojure-1.7.0.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:132) ~[clojure-1.7.0.jar:?]
at org.apache.storm.command.dev_zookeeper.main(Unknown Source) ~[storm-core-1.0.1.jar:1.0.1]
Redis is already running...
WARNING: send already refers to: #'clojure.core/send in namespace: setup.core, being replaced by: #'clj-kafka.new.producer/send
{:redis-host localhost, :kakfa-brokers localhost:9092}
Writing campaigns data to Redis.
Error: Could not find or load main class .home.eranga.Software.kafka-0.10.0.1.config.server.properties
Unrecognized option: --create
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
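For context, the topic-creation step that dies with "Unrecognized option: --create" is, as far as I can tell, just the standard kafka-topics.sh call; run by hand it would look roughly like this (the topic name and the partition/replication counts are my assumptions, not values taken from the benchmark):

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic ad-events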
The answer is described in the first and second comments below, posted by myself.
Trying to install the Confluent Platform on my macOS (11.2.3) machine, but Connect cannot start. I am using Java 8 Update 281.
I followed the suggestions of this Stack Overflow answer, but no luck.
Starting ZooKeeper
ZooKeeper is [UP]
Starting Kafka
Kafka is [UP]
Starting Schema Registry
Schema Registry is [UP]
Starting Kafka REST
Kafka REST is [UP]
Starting Connect
Error: Connect failed to start
Error log in connect.log:
[2021-04-05 23:47:40,486] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed:85)
java.lang.ExceptionInInitializerError
at javax.crypto.KeyGenerator.nextSpi(KeyGenerator.java:340)
at javax.crypto.KeyGenerator.<init>(KeyGenerator.java:168)
at javax.crypto.KeyGenerator.getInstance(KeyGenerator.java:223)
at org.apache.kafka.connect.runtime.distributed.DistributedConfig.validateKeyAlgorithm(DistributedConfig.java:502)
at org.apache.kafka.connect.runtime.distributed.DistributedConfig.lambda$configDef$2(DistributedConfig.java:375)
at org.apache.kafka.common.config.ConfigDef$LambdaValidator.ensureValid(ConfigDef.java:1038)
at org.apache.kafka.common.config.ConfigDef$ConfigKey.<init>(ConfigDef.java:1159)
at org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:152)
at org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:172)
at org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:211)
at org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:373)
at org.apache.kafka.connect.runtime.distributed.DistributedConfig.configDef(DistributedConfig.java:371)
at org.apache.kafka.connect.runtime.distributed.DistributedConfig.<clinit>(DistributedConfig.java:196)
at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:94)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:79)
Caused by: java.lang.SecurityException: Can not initialize cryptographic mechanism
at javax.crypto.JceSecurity.<clinit>(JceSecurity.java:93)
... 15 more
Caused by: java.lang.SecurityException: The jurisdiction policy files are not signed by the expected signer! (Policy files are specific per major JDK release.Ensure the correct version is installed.)
at javax.crypto.JarVerifier.verifyPolicySigned(JarVerifier.java:336)
at javax.crypto.JceSecurity.loadPolicies(JceSecurity.java:378)
at javax.crypto.JceSecurity.setupJurisdictionPolicies(JceSecurity.java:323)
at javax.crypto.JceSecurity.access$000(JceSecurity.java:50)
at javax.crypto.JceSecurity$1.run(JceSecurity.java:85)
at java.security.AccessController.doPrivileged(Native Method)
at javax.crypto.JceSecurity.<clinit>(JceSecurity.java:82)
... 15 more
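Given the last "Caused by" says the jurisdiction policy files are specific per major JDK release, a first sanity check is which JDK the Confluent scripts are actually picking up; roughly:

echo $JAVA_HOME
java -version
$JAVA_HOME/bin/java -version                   # should report the same version as above
ls $JAVA_HOME/jre/lib/security/*policy*.jar    # policy jars copied in from a different JDK 8 build can trigger exactly this signer error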
I am getting the below error while starting the WAS 8.5 server.
ADMU0116I: Tool information is being logged in file
C:\Ibm_WAS8.5\profiles\June\logs\server1\startServer.log
ADMU0128I: Starting tool with the June profile
ADMU3100I: Reading configuration for server: server1
ADMU0111E: Program exiting with error:
com.ibm.websphere.management.exception.AdminException
ADMU1211I: To obtain a full trace of the failure, use the -trace option.
ADMU0211I: Error details may be seen in the file:
C:\Ibm_WAS8.5\profiles\June\logs\server1\startServer.log
Please refer to the exception below, taken from startServer.log:
[5/15/18 18:11:32:012 EDT] 00000001 AdminTool A ADMU0111E: Program exiting with error: com.ibm.websphere.management.exception.AdminException
at com.ibm.ws.management.launcher.DefaultLaunchPlatformHelper.getDefaultBootclasspath(DefaultLaunchPlatformHelper.java:121)
at com.ibm.ws.management.launcher.LaunchCommand.processBootstrapClasspathInfo(LaunchCommand.java:1689)
at com.ibm.ws.management.launcher.LaunchCommand.setParamsFromJavaProcessDef(LaunchCommand.java:1227)
at com.ibm.ws.management.launcher.LaunchCommand.setParamsFromProcessDef(LaunchCommand.java:632)
at com.ibm.ws.management.launcher.LaunchCommand.init(LaunchCommand.java:376)
at com.ibm.ws.management.launcher.LaunchCommand.<init>(LaunchCommand.java:270)
at com.ibm.ws.management.tools.WsServerLauncher.initializeRepositoryAndLauncher(WsServerLauncher.java:424)
at com.ibm.ws.management.tools.WsServerLauncher.runTool(WsServerLauncher.java:279)
at com.ibm.ws.management.tools.AdminTool.executeUtility(AdminTool.java:269)
at com.ibm.ws.management.tools.WsServerController.executeUtilityOnWindows(WsServerController.java:136)
at com.ibm.ws.management.tools.WsServerLauncher.main(WsServerLauncher.java:140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at com.ibm.ws.bootstrap.WSLauncher.main(WSLauncher.java:280)
I got the solution from the following links: listout-sdk and commands.
cd /cygdrive/c/IBM_8.5.5/WebSphere/AppServer/bin
./managesdk.bat -listAvailable                          # list the SDKs available in this installation
./managesdk.bat -enableProfileAll -sdkName 1.6_64       # enable the 1.6 64-bit SDK for all existing profiles
./managesdk.bat -setCommandDefault -sdkName 1.6_64      # use it for command-line tools by default
./managesdk.bat -setNewProfileDefault -sdkName 1.6_64   # use it for newly created profiles by default
Restart the server.
I am running a Storm topology on my cluster and I can see that the topology is submitted successfully, but it is not reading messages from Kafka. When I checked the topology logs in Storm, I found there is an issue with the stormconf.ser file:
it cannot be found at the path /archive/hadoop/storm/supervisor/stormdist/FieldsGrouping_Topology-2-1441255014/stormconf.ser
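(A quick way to look at the supervisor's local state directly is something like the snippet below; the path is copied from the error, and storm.local.dir is the standard key, though its value may differ on other installations.)

ls -l /archive/hadoop/storm/supervisor/stormdist/FieldsGrouping_Topology-2-1441255014/
grep storm.local.dir conf/storm.yaml   # run from the Storm installation directory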
Here is the full Stack Trace ---
2015-09-03 00:39:06 b.s.d.worker [ERROR] Error on initialization of server mk-worker
java.io.FileNotFoundException: File '/archive/hadoop/storm/supervisor/stormdist/FieldsGrouping_Topology-2-1441255014/stormconf.ser' does not exist
at org.apache.commons.io.FileUtils.openInputStream(FileUtils.java:299) ~[commons-io-2.4.jar:2.4]
at org.apache.commons.io.FileUtils.readFileToByteArray(FileUtils.java:1763) ~[commons-io-2.4.jar:2.4]
at backtype.storm.config$read_supervisor_storm_conf.invoke(config.clj:214) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.worker$fn__6019$exec_fn__1142__auto____6020.invoke(worker.clj:382) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at clojure.lang.AFn.applyToHelper(AFn.java:185) [clojure-1.5.1.jar:na]
at clojure.lang.AFn.applyTo(AFn.java:151) [clojure-1.5.1.jar:na]
at clojure.core$apply.invoke(core.clj:617) ~[clojure-1.5.1.jar:na]
Please help me with this. Thanks.
I am using Spark 1.3.0 with HBase 1.0. After about a week, HBase still runs successfully from Java code, and the HBase shell also works fine, but accessing HBase from Spark now gives errors. The errors only appear after the cluster has been running for a long time; otherwise everything works fine with Spark too.
I have already checked the Hadoop and HBase cluster health, and it is fine.
In the Spark UI:
Caused by: java.io.IOException: Enable/Disable failed
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.isTableOnlineState(ZooKeeperRegistry.java:110)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isTableDisabled(ConnectionManager.java:907)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.relocateRegion(ConnectionManager.java:1076)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.relocateRegion(ConnectionManager.java:1064)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getRegionLocation(ConnectionManager.java:885)
at org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:78)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
... 23 more
Caused by:org.apache.zookeeper.KeeperException$ConnectionLossException:KeeperErrorCode = ConnectionLoss for /hbase/table/my_sample_table
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1151)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:360)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:685)
at org.apache.hadoop.hbase.zookeeper.ZKTableStateClientSideReader.getTableState(ZKTableStateClientSideReader.java:186)
at org.apache.hadoop.hbase.zookeeper.ZKTableStateClientSideReader.isDisabledTable(ZKTableStateClientSideReader.java:60)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.isTableOnlineState(ZooKeeperRegistry.java:108)
... 29 more
In some cases
WARN ZKUtil:484 - hconnection-0x792dd4040x0, quorum=Megatron:2222, baseZNode=/hbase Unable to set watcher on znode (/hbase/hbaseid)
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:222)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:481)
at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:86)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:833)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:623)
at sun.reflect.GeneratedConstructorAccessor530.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
at org.apache.hadoop.hbase.mapreduce.TableInputFormat.initialize(TableInputFormat.java:183)
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:230)
at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:237)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:95)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1511)
at org.apache.spark.rdd.RDD.count(RDD.scala:1006)
at org.apache.spark.api.java.JavaRDDLike$class.count(JavaRDDLike.scala:412)
at org.apache.spark.api.java.JavaRDD.count(JavaRDD.scala:32)
The first error occurred when trying to put something into HBase.
The second error occurred when trying to read from HBase.
The problem is with HBase table locking; the table is hung up in a limbo state.
There are some suggested solutions.
Run the following commands:
hbase hbck -fix
HMaster reboot
If the above solution fails:
Restart the Master
If that also fails:
Restart ZooKeeper
If everything fails:
Restart the cluster
More details can be found at this URL:
JIRA Ticket
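Before restarting anything, it can also be worth confirming the table state from the HBase shell; status, is_enabled and is_disabled are standard shell commands, and the table name below is simply the one from the error above:

hbase shell
status 'simple'
is_enabled 'my_sample_table'
is_disabled 'my_sample_table'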
I am using Hadoop 2.3.0 installed as a single-node cluster (pseudo-distributed mode) on a CentOS 6.4 Amazon EC2 instance with 420 GB of instance storage and 7.5 GB of RAM. My understanding is that the "Spill Failed" exception only occurs when the node runs out of disk space; however, after running map/reduce tasks for only a short amount of time (nowhere near 420 GB of data) I get the following exception.
I would like to mention that I moved the Hadoop installation on the same node from an 8 GB EBS volume (where I had installed it originally) to the 420 GB instance store volume, changed the $HADOOP_HOME environment variable and other properties to point to the instance store volume accordingly, and Hadoop 2.3.0 is now completely contained on the 420 GB drive.
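Since the underlying cause below is "Could not find any valid local directory", one thing worth checking is where the intermediate spill files are actually being written and how much space that mount has; a rough check, assuming the default Hadoop 2 config layout:

df -h
grep -B1 -A2 hadoop.tmp.dir $HADOOP_HOME/etc/hadoop/core-site.xml   # the MapReduce local/spill directories default to locations under hadoop.tmp.dir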
However, I still see the following exception. Can you please let me know if there is anything besides disk space that can cause the Spill Failed exception?
2014-02-28 15:35:07,630 ERROR [IPC Server handler 12 on 58189] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1393591821307_0013_m_000000_0 - exited :
java.io.IOException: Spill failed
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.checkSpillException(MapTask.java:1533)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1442)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:437)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Unknown Source)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for attempt_1393591821307_0013_m_000000_0_spill_26.out
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:402)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
at org.apache.hadoop.mapred.YarnOutputFiles.getSpillFileForWrite(YarnOutputFiles.java:159)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1564)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$900(MapTask.java:853)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$SpillThread.run(MapTask.java:1503)
2014-02-28 15:35:07,604 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:java.io.IOException: Spill failed
2014-02-28 15:35:07,605 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: Spill failed
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.checkSpillException(MapTask.java:1533)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1442)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:437)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Unknown Source)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for attempt_1393591821307_0013_m_000000_0_spill_26.out
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:402)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
at org.apache.hadoop.mapred.YarnOutputFiles.getSpillFileForWrite(YarnOutputFiles.java:159)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1564)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$900(MapTask.java:853)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$SpillThread.run(MapTask.java:1503)
I was able to solve this by setting the hadoop.tmp.dir value to a location on the instance storage; by default it was pointing to the EBS-backed root volume.
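Concretely, that means overriding hadoop.tmp.dir in core-site.xml; a minimal sketch, where the actual mount point of the instance store volume is of course specific to my setup:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/mnt/instance-store/hadoop-tmp</value>
</property>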