Problem with Big Sur 11.0.1 and PC/SC library - java

I have a problem with the newest version of macOS (Big Sur 11.0.1) and the PC/SC library; before Big Sur the program that uses the library worked fine, but after the update it isn't working anymore. I am using Java version 1.8.0_271.
In the code, I use the method TerminalFactory.getDefaultType() to get the default Terminal Factory type. Before the update I was receiving "PC/SC", but after the update I am receiving "None".
If I try to force getting an instance with this line:
TerminalFactory factory = TerminalFactory.getInstance("PC/SC", null);
It throws the following error:
java.security.NoSuchAlgorithmException: Error constructing implementation (algorithm: PC/SC, provider: SunPCSC, class: sun.security.smartcardio.SunPCSC$Factory)
at java.security.Provider$Service.newInstance(Provider.java:1711)
at sun.security.jca.GetInstance.getInstance(GetInstance.java:243)
at sun.security.jca.GetInstance.getInstance(GetInstance.java:190)
at javax.smartcardio.TerminalFactory.getInstance(TerminalFactory.java:245)
at prueba.Prueba.isConnected(Prueba.java:165)
at prueba.Prueba.main(Prueba.java:63)
Caused by: java.lang.UnsupportedOperationException: PC/SC not available on this platform
at sun.security.smartcardio.PCSC.checkAvailable(PCSC.java:46)
at sun.security.smartcardio.SunPCSC$Factory.<init>(SunPCSC.java:59)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.security.Provider$Service.newInstance(Provider.java:1703)
... 5 more
Caused by: java.io.IOException: No PC/SC library found on this system
at sun.security.smartcardio.PlatformPCSC.getLibraryName(PlatformPCSC.java:122)
at sun.security.smartcardio.PlatformPCSC.access$000(PlatformPCSC.java:43)
at sun.security.smartcardio.PlatformPCSC$1.run(PlatformPCSC.java:64)
at sun.security.smartcardio.PlatformPCSC$1.run(PlatformPCSC.java:60)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.smartcardio.PlatformPCSC.<clinit>(PlatformPCSC.java:60)
at sun.security.smartcardio.SunPCSC$Factory.<init>(SunPCSC.java:59)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.security.Provider$Service.newInstance(Provider.java:1703)
at sun.security.jca.GetInstance.getInstance(GetInstance.java:243)
at sun.security.jca.GetInstance.getInstance(GetInstance.java:190)
at javax.smartcardio.TerminalFactory.getInstance(TerminalFactory.java:245)
at javax.smartcardio.TerminalFactory.<clinit>(TerminalFactory.java:106)
at prueba.Prueba.isConnected(Prueba.java:164)
... 1 more
entro isConnected--2
Exception in thread "main" java.lang.NullPointerException
at prueba.Prueba.isConnected(Prueba.java:173)
at prueba.Prueba.main(Prueba.java:63)
I found that Big Sur removes the PC/SC library and it is not possible to install it.
I don't know if there is someone with the same error or someone who has already fixed it.
Thanks for the help.

Because of changes in macOS Big Sur, the Java PC/SC implementation no longer works correctly:
https://bugs.openjdk.java.net/browse/JDK-8255877
The workaround is to set the system property:
sun.security.smartcardio.library=/System/Library/Frameworks/PCSC.framework/Versions/Current/PCSC
before trying to use TerminalFactory.
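For reference, a minimal sketch of applying that workaround in Java (the property name and library path come from the bug report above; the surrounding class and method are only illustrative, and the property can equally be passed as -Dsun.security.smartcardio.library=... on the java command line):
import javax.smartcardio.CardTerminal;
import javax.smartcardio.TerminalFactory;

public class PcscWorkaround {
    public static void main(String[] args) throws Exception {
        // Must be set before the smartcardio classes try to locate the PC/SC library
        System.setProperty("sun.security.smartcardio.library",
                "/System/Library/Frameworks/PCSC.framework/Versions/Current/PCSC");

        TerminalFactory factory = TerminalFactory.getInstance("PC/SC", null);
        for (CardTerminal terminal : factory.terminals().list()) {
            System.out.println("Found terminal: " + terminal.getName());
        }
    }
}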

Related

Kafka Structured Streaming KafkaSourceProvider could not be instantiated

I am working on a streaming project where I have a Kafka stream of ping statistics like so:
64 bytes from vas.fractalanalytics.com (192.168.30.26): icmp_seq=1 ttl=62 time=0.913 ms
64 bytes from vas.fractalanalytics.com (192.168.30.26): icmp_seq=2 ttl=62 time=0.936 ms
64 bytes from vas.fractalanalytics.com (192.168.30.26): icmp_seq=3 ttl=62 time=0.980 ms
64 bytes from vas.fractalanalytics.com (192.168.30.26): icmp_seq=4 ttl=62 time=0.889 ms
I am trying to read this as a structured stream in PySpark. I start pyspark with the following command:
pyspark --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.0
The PySpark version is 2.4 and the Python version is 2.7 (I tried with 3.6 as well).
I get an error as soon as I run this piece of code (following the Structured Streaming + Kafka Integration Guide):
df = spark.readStream.format("kafka").option("kafka.bootstrap.servers", "172.18.2.21:2181").option("subscribe", "ping-stats").load()
I run into the following error:
py4j.protocol.Py4JJavaError: An error occurred while calling o37.load.
: java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.kafka010.KafkaSourceProvider could not be instantiated
at java.util.ServiceLoader.fail(ServiceLoader.java:232)
at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
at scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:630)
at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:161)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: org.apache.spark.internal.Logging.$init$(Lorg/apache/spark/internal/Logging;)V
at org.apache.spark.sql.kafka010.KafkaSourceProvider.<init>(KafkaSourceProvider.scala:44)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
... 23 more
Can someone help me out with this?
I managed to solve this by ensuring that the spark-sql-kafka package's version matches the Spark version.
In my case, I am now using --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.1 for my Spark version 2.4.1; after that, the .format("kafka") part of the code resolves.
Also, the Scala 2.12 build of the package (i.e., org.apache.spark:spark-sql-kafka-0-10_2.12:2.4.1) does not seem stable at the time of writing, and using it will also cause the above error.
*EDIT: the Scala 2.12 spark-sql-kafka packages seem to only work with Spark built against Scala 2.12. Hence, for Spark 2.x (pre-built with Scala 2.11 by default), you need to use Spark binaries built with Scala 2.12 (e.g. spark-2.4.1-bin-without-hadoop-scala-2.12.tgz) if you really want to use the Scala 2.12 spark-sql-kafka package. Spark 3.x is pre-built with Scala 2.12 by default, so there you'll only see/use the 2.12 build of the package.
The issue was solved when I used Spark version 2.3.2 with Scala version 2.11.11 and the dependency org.apache.spark:spark-sql-kafka-0-10_2.10:2.0.2
#spark-shell -i Process.scala --master local[2] --packages org.apache.spark:spark-sql-kafka-0-10_2.10:2.0.2 ...

Failed to run elasticsearch --version command

Elasticsearch Version - 5.5.1,
Java Version - 8-u131
When I run the elasticsearch version command (or elasticsearch-plugin install), instead of displaying the version it shows this stack trace:
>./elasticsearch --version
Exception in thread "main" java.lang.IllegalArgumentException: Invalid Configuration class specified
at org.apache.logging.log4j.core.config.builder.impl.DefaultConfigurationBuilder.build(DefaultConfigurationBuilder.java:198)
at org.apache.logging.log4j.core.config.builder.impl.DefaultConfigurationBuilder.build(DefaultConfigurationBuilder.java:159)
at org.apache.logging.log4j.core.config.builder.impl.DefaultConfigurationBuilder.build(DefaultConfigurationBuilder.java:55)
at org.elasticsearch.common.logging.LogConfigurator.configureStatusLogger(LogConfigurator.java:175)
at org.elasticsearch.common.logging.LogConfigurator.configureWithoutConfig(LogConfigurator.java:99)
at org.elasticsearch.cli.Command.main(Command.java:85)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at org.apache.logging.log4j.core.config.builder.impl.DefaultConfigurationBuilder.build(DefaultConfigurationBuilder.java:170)
... 7 more
Caused by: java.lang.NoSuchMethodError: org.apache.logging.log4j.core.config.AbstractConfiguration.<init>(Lorg/apache/logging/log4j/core/LoggerContext;Lorg/apache/logging/log4j/core/config/ConfigurationSource;)V
at org.apache.logging.log4j.core.config.builder.impl.BuiltConfiguration.<init>(BuiltConfiguration.java:58)
... 12 more
This seems like a case of missing/incorrect versioning of the log4j jars packaged with the Elasticsearch 5.5.1 RPM.
It ships with 2.9.1, and the missing method appeared in 2.7.x. Do you get the same exception with the zip version? You might want to add set -x as the second line in the elasticsearch script file and check what is passed in -cp "$ES_CLASSPATH" - maybe it picks up some old jars from some other path.

Cannot setup Apache Spark 2.1.1 on Windows 10

I have installed Apache Spark 2.1.1 on Windows 10, with Java 1.8 and Python 3.6 (Anaconda 4.3.1). I have also downloaded winutils.exe, set up environment variables for JAVA_HOME, HADOOP_HOME and SPARK_HOME, and updated the path variable. I have also run winutils.exe chmod -R 777 \tmp\hive. But I am getting the error below when running pyspark at the cmd prompt.
Please can someone help? Let me know if I missed any important detail.
Thanks in advance!
c:\Spark>bin\pyspark
Python 3.6.0 |Anaconda 4.3.1 (64-bit)| (default, Dec 23 2016, 11:57:41) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Traceback (most recent call last):
File "c:\Spark\python\pyspark\sql\utils.py", line 63, in deco
return f(*a, **kw)
File "c:\Spark\python\lib\py4j-0.10.4-src.zip\py4j\protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o22.sessionState.
: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState':
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:981)
at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:110)
at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
I still get errors when launching spark-shell, but it looks like Spark launches since I get the 'Welcome to Spark' piece. The error I get is:
C:\Spark>bin\spark-shell
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/06/23 12:20:15 WARN General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/C:/Spark/jars/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/C:/Spark/bin/../jars/datanucleus-api-jdo-3.2.6.jar."
17/06/23 12:20:15 WARN General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/C:/Spark/jars/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/C:/Spark/bin/../jars/datanucleus-rdbms-3.2.9.jar."
17/06/23 12:20:15 WARN General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/C:/Spark/bin/../jars/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/C:/Spark/jars/datanucleus-core-3.2.10.jar."
java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState':
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:981)
at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:110)
at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:109)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:878)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:878)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:878)
at org.apache.spark.repl.Main$.createSparkSession(Main.scala:96)
... 47 elided
Caused by: java.lang.reflect.InvocationTargetException:
java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveExternalCatalog':
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:978)
... 58 more
Caused by: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveExternalCatalog':
at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:169)
at org.apache.spark.sql.internal.SharedState.<init>(SharedState.scala:86)
at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:101)
at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:101)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession.sharedState$lzycompute(SparkSession.scala:101)
at org.apache.spark.sql.SparkSession.sharedState(SparkSession.scala:100)
at org.apache.spark.sql.internal.SessionState.<init>(SessionState.scala:157)
at org.apache.spark.sql.hive.HiveSessionState.<init>(HiveSessionState.scala:32)
... 63 more
Caused by: java.lang.reflect.InvocationTargetException: java.lang.reflect.InvocationTargetException: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)V
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:166)
... 71 more
Caused by: java.lang.reflect.InvocationTargetException:
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)V
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:358)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:262)
at org.apache.spark.sql.hive.HiveExternalCatalog.<init>(HiveExternalCatalog.scala:66)
... 76 more
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)V
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode(NativeIO.java:524)
at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:478)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:532)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:509)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:305)
at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:639)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:561)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:188)
... 84 more
14: error: not found: value spark
import spark.implicits._
^
14: error: not found: value spark
import spark.sql
^
Welcome to
The setup that worked for me is as follows (I didn't use winutils.exe):
Install pyspark and findspark from the "Anaconda Command Prompt":
pip3 install pyspark
and
pip3 install findspark
As you have already downloaded the Spark setup, unzip it and keep it on the C: drive, i.e. "C:\spark-2.2.0-bin-hadoop2.7". Create a new environment variable "SPARK_HOME" and set it to "C:\spark-2.2.0-bin-hadoop2.7\bin", then open the "path" variable in system variables and add the same there as well.
Now open your command prompt, go from "C:\User*" to "C:\" by running cd.. twice, and run the following command
set SPARK_HOME='spark-2.2.0-bin-hadoop2.7'
and you are good to go.
Now you just have to point to the Spark location before importing pyspark in your Jupyter notebook. Use the following code:
import findspark
findspark.init('C:\spark-2.2.0-bin-hadoop2.7')
import pyspark

HBase crashes after few seconds

I have been following the HBase installation instructions as given on this page:
https://hbase.apache.org/book.html#quickstart
I am using HBase version 1.1.3 (the stable release) and have configured it in standalone mode. I have installed OpenJDK 7.
When I try to start HBase, it crashes after a few seconds and I get the following error:
2016-01-29 17:37:04,136 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster
at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:143)
at org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:219)
at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:155)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:224)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2355)
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer
at org.apache.hadoop.hbase.util.Bytes.putInt(Bytes.java:899)
at org.apache.hadoop.hbase.KeyValue.createByteArray(KeyValue.java:1082)
at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:652)
at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:580)
at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:483)
at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:370)
at org.apache.hadoop.hbase.KeyValue.<clinit>(KeyValue.java:267)
at org.apache.hadoop.hbase.HConstants.<clinit>(HConstants.java:978)
at org.apache.hadoop.hbase.HTableDescriptor.<clinit>(HTableDescriptor.java:1488)
at org.apache.hadoop.hbase.util.FSTableDescriptors.<init>(FSTableDescriptors.java:124)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:570)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:365)
at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.<init>(HMasterCommandLine.java:307)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
... 7 more
Can anyone tell me the reason for this error?
Please let me know if you need more info.
The exception causing this is NoClassDefFoundError, which usually means you are missing something from your classpath. In this case you're missing org.apache.hadoop.hbase.util.Bytes, which comes from hbase-common.jar. However, if one is missing, there are probably others too. This might mean you're starting HBase the wrong way. Have you tried the included start-up scripts?
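As a quick sanity check, you could run a small Java snippet on the same classpath HBase uses to see whether the class resolves at all and where it is loaded from (a hypothetical diagnostic, not part of the HBase scripts; the class name comes from the stack trace above):
public class ClasspathCheck {
    public static void main(String[] args) {
        try {
            Class<?> c = Class.forName("org.apache.hadoop.hbase.util.Bytes");
            // CodeSource may be null for bootstrap classes; for hbase-common.jar it should print the jar path
            System.out.println("Loaded from: " + c.getProtectionDomain().getCodeSource().getLocation());
        } catch (ClassNotFoundException e) {
            System.out.println("Not on the classpath: " + e.getMessage());
        }
    }
}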

NoSuchMethodError com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Builder.maximumWeightedCapacity

I am running a Java test program to establish a connection to OrientDB and keep getting these exceptions when I run the code from within IntelliJ IDEA or OpenFire (an XMPP server):
java.lang.NoSuchMethodError: com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Builder.maximumWeightedCapacity(J)Lcom/googlecode/concurrentlinkedhashmap/ConcurrentLinkedHashMap$Builder;
at com.orientechnologies.orient.core.db.record.ridbag.sbtree.OSBTreeCollectionManagerAbstract.<init>(OSBTreeCollectionManagerAbstract.java:43)
at com.orientechnologies.orient.core.db.record.ridbag.sbtree.OSBTreeCollectionManagerAbstract.<init>(OSBTreeCollectionManagerAbstract.java:48)
at com.orientechnologies.orient.client.remote.OSBTreeCollectionManagerRemote.<init>(OSBTreeCollectionManagerRemote.java:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:379)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx$2.call(ODatabaseDocumentTx.java:2863)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx$2.call(ODatabaseDocumentTx.java:2854)
at com.orientechnologies.common.concur.resource.OSharedContainerImpl.getResource(OSharedContainerImpl.java:64)
at com.orientechnologies.orient.core.storage.OStorageAbstract.getResource(OStorageAbstract.java:143)
at com.orientechnologies.orient.client.remote.OStorageRemoteThread.getResource(OStorageRemoteThread.java:658)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.initAtFirstOpen(ODatabaseDocumentTx.java:2853)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.open(ODatabaseDocumentTx.java:260)
at com.orientechnologies.orient.jdbc.OrientJdbcConnection.<init>(OrientJdbcConnection.java:63)
at com.orientechnologies.orient.jdbc.OrientJdbcDriver.connect(OrientJdbcDriver.java:52)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:187)
at com.momentum.orientdb.core.JdbcConnectionManager.getConnection(JdbcConnectionManager.java:87)
at com.momentum.orientdb.core.JdbcConnectionManager.getConnection(JdbcConnectionManager.java:56)
at com.momentum.orientdb.core.JdbcConnectionManager$getConnection.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:112)
at com.momentum.orientdb.core.test.JdbcConnectionManagerTest.testGetConnection(JdbcConnectionManagerTest.groovy:26)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:78)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:212)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Any idea why? How can I fix it?
UPDATE 1
It seems to me that it has to do with this library (from Google): concurrentlinkedhashmap-lru-1.4.1.jar.
But I searched the development computer's file system and found only the above mentioned version (1.4.1).
UPDATE 2 - Solution
After searching thoroughly I found that I had different versions of concurrentlinkedhashmap-lru-xxxx.jar on my development machine's file system and somehow I "managed" to put two different versions in the build.
Meanwhile, I still get the same exceptions when using my library (which I've already tested) to establish a connection to OrientDB from within OpenFire, since OpenFire uses a different version of concurrentlinkedhashmap than the one used by OrientDB (concurrentlinkedhashmap-lru-1.4.1.jar).
NoSuchMethodError happens because at runtime, Java tried to call a method on an object, and found it didn't exist, in particular:
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Builder.maximumWeightedCapacity(J)Lcom/googlecode/concurrentlinkedhashmap/ConcurrentLinkedHashMap$Builder;
Of course, you'd expect the code wouldn't compile if that was the case, so this almost always means you've built against a different version of the class than the one provided at runtime - it's usually a versioning issue.
So, you should figure out which library you're getting ConcurrentLinkedHashMap from, and compare this with the version being used at runtime (how are you deploying this? Is it running straight from IntelliJ?). You're likely to find that they have different versions, and you'll need to alter your version to match.
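One way to confirm which jar actually supplies the class at runtime is to print its code source from wherever the failing code runs (a hypothetical diagnostic snippet, not part of OrientDB or OpenFire; run it on the same classpath that produces the error):
import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;

public class WhichJar {
    public static void main(String[] args) {
        // Prints the jar the class was loaded from, e.g. .../concurrentlinkedhashmap-lru-1.4.1.jar
        System.out.println(ConcurrentLinkedHashMap.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}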
Please go to the link below and download the JAR file:
http://mvnrepository.com/artifact/com.googlecode.concurrentlinkedhashmap/concurrentlinkedhashmap-lru/1.4.2
The problem is with the version you are using; the correct version is 1.4.2.
