Import data from MapReduce to HBase (TableOutputFormat error) - java

I am trying to save data from a MapReduce job into HBase. We wrote a script that worked great on an older version of Hadoop (CDH3u4). Now we have upgraded to the newest version (CDH 5.0.2) and the script no longer works.
When I run the program on the new version, I get the following error:
Exception in thread "main" java.lang.RuntimeException: java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:211)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:455)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1313)
at com.nrholding.t0_mr.main.ELogHBaseImport.main(ELogHBaseImport.java:89)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:389)
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:366)
at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:247)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:188)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:150)
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:206)
... 17 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:387)
... 22 more
Caused by: java.lang.NoClassDefFoundError: org/cloudera/htrace/Trace
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:195)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:479)
at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:83)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:801)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:633)
... 27 more
Caused by: java.lang.ClassNotFoundException: org.cloudera.htrace.Trace
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 33 more
It seems that the problem is in TableOutputFormat, so I checked the following:
The proper library is in the libpath.
The ZooKeeper quorum and ZooKeeper port are set in hbase-site.xml.
The table wp_json exists in HBase.
Here is the code that causes the problem:
public static void main(String args[]) throws Exception {
    Configuration conf = new Configuration();
    conf.set("hbase.zookeeper.quorum", "zookeeper_server1,zookeeper_server2,zookeeper_server3");
    conf.set(TableOutputFormat.OUTPUT_TABLE, "wp_json");
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    String input = otherArgs[0];

    Job job = Job.getInstance(conf, "ELogHBaseImport");

    // Input is just text files in HDFS
    FileInputFormat.addInputPath(job, new Path(input));

    job.setJarByClass(ELogHBaseImport.class);
    job.setMapperClass(Map.class);
    job.setNumReduceTasks(0);
    job.setOutputFormatClass(TableOutputFormat.class);
    job.waitForCompletion(true);
}
When I use NullOutputFormat, everything works great, but of course nothing is written to HBase.
The part of TableOutputFormat responsible for the error is here (the stack trace points at lines 206 and 211):
163 /**
164  * Returns the output committer.
165  *
166  * @param context The current context.
167  * @return The committer.
168  * @throws IOException When creating the committer fails.
169  * @throws InterruptedException When the job is aborted.
170  * @see org.apache.hadoop.mapreduce.OutputFormat#getOutputCommitter(org.apache.hadoop.mapreduce.TaskAttemptContext)
171  */
172 @Override
173 public OutputCommitter getOutputCommitter(TaskAttemptContext context)
174     throws IOException, InterruptedException {
175   return new TableOutputCommitter();
176 }
177
178 public Configuration getConf() {
179   return conf;
180 }
181
182 @Override
183 public void setConf(Configuration otherConf) {
184   this.conf = HBaseConfiguration.create(otherConf);
185
186   String tableName = this.conf.get(OUTPUT_TABLE);
187   if(tableName == null || tableName.length() <= 0) {
188     throw new IllegalArgumentException("Must specify table name");
189   }
190
191   String address = this.conf.get(QUORUM_ADDRESS);
192   int zkClientPort = this.conf.getInt(QUORUM_PORT, 0);
193   String serverClass = this.conf.get(REGION_SERVER_CLASS);
194   String serverImpl = this.conf.get(REGION_SERVER_IMPL);
195
196   try {
197     if (address != null) {
198       ZKUtil.applyClusterKeyToConf(this.conf, address);
199     }
200     if (serverClass != null) {
201       this.conf.set(HConstants.REGION_SERVER_IMPL, serverImpl);
202     }
203     if (zkClientPort != 0) {
204       this.conf.setInt(HConstants.ZOOKEEPER_CLIENT_PORT, zkClientPort);
205     }
206     this.table = new HTable(this.conf, tableName);
207     this.table.setAutoFlush(false, true);
208     LOG.info("Created table instance for " + tableName);
209   } catch(IOException e) {
210     LOG.error(e);
211     throw new RuntimeException(e);
212   }
213 }

The error actually boils down to this message:
Caused by: java.lang.ClassNotFoundException: org.cloudera.htrace.Trace
You are probably missing a jar on the classpath. The class above is referenced indirectly by the HBase client code rather than by your own code; in CDH 5 it typically lives in the htrace-core jar shipped in HBase's lib directory. Try to put the jar containing this class on the classpath.
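One hedged way to avoid chasing individual jars (assuming the job is submitted with hadoop jar and the HBase client libraries are available on the submitting machine) is to let HBase ship its own dependencies, htrace-core included, to the cluster via the distributed cache:

import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;

// In the driver, after the job is configured:
Job job = Job.getInstance(conf, "ELogHBaseImport");
job.setOutputFormatClass(TableOutputFormat.class);
// Ships the HBase client jars and their transitive dependencies
// (ZooKeeper, htrace-core, protobuf, ...) with the job.
TableMapReduceUtil.addDependencyJars(job);

Alternatively, exporting HADOOP_CLASSPATH="$(hbase classpath)" before submitting usually makes the classes resolvable on the client side.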
Hope this helps!!!

Related

Pyspark: python worker failed to connect back/socketTimeOut

I am very new to pyspark and still somewhat of a beginner with Python. I'm taking an online course to learn pyspark better, but I cannot get the following code to run. I've tried both collect and take to see if I can test the initial result after parsing lines, but that doesn't seem to work. I have attached the error message provided by Jupyter Notebook on my local Windows machine.
import findspark
findspark.init()

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local").setAppName("MinTemperatures")
conf.set("spark.network.timeout", "600s")
sc = SparkContext.getOrCreate()

def parseLine(line):
    fields = line.split(',')
    stationID = fields[0]
    entryType = fields[2]
    temperature = fields[3]
    return (stationID, entryType, temperature)

file = "C:/sparkCourse/1800.csv"
lines = sc.textFile(file)
parsedLines = lines.map(parseLine)
minTemps = parsedLines.filter(lambda x: "TMIN" in x[1])
stationTemps = minTemps.map(lambda x: (x[0], x[2]))
minTemps = stationTemps.reduceByKey(lambda x, y: min(x, y))

lines.take(5)
#results = minTemps.collect();
#for result in results:
#    print(result[0] + "\t{:.2f}F".format(result[1]))
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-4-6a97a8b6e79c> in <module>
21 minTemps = stationTemps.reduceByKey(lambda x, y: min(x,y))
22
---> 23 lines.take(5)
24 #results = minTemps.collect();
25
C:\spark\python\pyspark\rdd.py in take(self, num)
1358
1359 p = range(partsScanned, min(partsScanned + numPartsToTry, totalParts))
-> 1360 res = self.context.runJob(self, takeUpToNumLeft, p)
1361
1362 items += res
C:\spark\python\pyspark\context.py in runJob(self, rdd, partitionFunc, partitions, allowLocal)
1049 # SparkContext#runJob.
1050 mappedRDD = rdd.mapPartitions(partitionFunc)
-> 1051 sock_info = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
1052 return list(_load_from_socket(sock_info, mappedRDD._jrdd_deserializer))
1053
C:\spark\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py in __call__(self, *args)
1255 answer = self.gateway_client.send_command(command)
1256 return_value = get_return_value(
-> 1257 answer, self.gateway_client, self.target_id, self.name)
1258
1259 for temp_arg in temp_args:
C:\spark\python\pyspark\sql\utils.py in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
C:\spark\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
--> 328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:153)
at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
You created a SparkConf, but then you didn't pass it to the SparkContext. I think it should look something like this:
conf = SparkConf().setMaster("local").setAppName("MinTemperatures").set("spark.network.timeout", "600s")
sc = SparkContext(conf=conf)
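One caveat (an assumption about the notebook setup, not something shown in the question): SparkContext.getOrCreate() with no arguments returns any context that already exists and silently ignores new settings, so if you keep getOrCreate, restart the kernel first or pass the conf explicitly:
sc = SparkContext.getOrCreate(conf=conf)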

Spark job failed due to not serializable objects

I'm running a Spark job to generate HFiles for my HBase data store.
It used to work fine with my Cloudera cluster, but when we switched to an EMR cluster, it fails with the following stack trace:
Serialization stack:
- object not serializable (class: org.apache.hadoop.hbase.io.ImmutableBytesWritable, value: 50 31 36 31 32 37 30 33 34 5f 49 36 35 38 34 31 35 38 35); not retrying
Serialization stack:
- object not serializable (class: org.apache.hadoop.hbase.io.ImmutableBytesWritable, value: 50 31 36 31 32 37 30 33 34 5f 49 36 35 38 34 31 35 38 35)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1505)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1493)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1492)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1492)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:803)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:803)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:803)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1720)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1675)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1664)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:629)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1925)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1938)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1158)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1085)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:1005)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:996)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:996)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:996)
at org.apache.spark.api.java.JavaPairRDD.saveAsNewAPIHadoopFile(JavaPairRDD.scala:823)
My questions:
What could cause the difference between the two runs? A version difference between the two clusters?
I did some research and found this post; following it, I added the Kryo parameters to my spark-submit command, which now looks like this:
spark-submit --conf spark.kryo.classesToRegister=org.apache.hadoop.hbase.io.ImmutableBytesWritable,org.apache.hadoop.hbase.KeyValue --master yarn --deploy-mode client --driver-memory 16G --executor-memory 18G ...
But still, I got the same error.
Here's my Java code:
protected void generateHFilesUsingSpark(JavaRDD<Row> rdd) throws Exception {
    JavaPairRDD<ImmutableBytesWritable, KeyValue> javaPairRdd = rdd.mapToPair(
        new PairFunction<Row, ImmutableBytesWritable, KeyValue>() {
            public Tuple2<ImmutableBytesWritable, KeyValue> call(Row row) throws Exception {
                String key = (String) row.get(0);
                String value = (String) row.get(1);

                ImmutableBytesWritable rowKey = new ImmutableBytesWritable();
                byte[] rowKeyBytes = Bytes.toBytes(key);
                rowKey.set(rowKeyBytes);

                KeyValue keyValue = new KeyValue(rowKeyBytes,
                    Bytes.toBytes("COL"),
                    Bytes.toBytes("FM"),
                    ProductJoin.newBuilder()
                        .setId(key)
                        .setSolrJson(value)
                        .build().toByteArray());

                return new Tuple2<ImmutableBytesWritable, KeyValue>(rowKey, keyValue);
            }
        });

    Configuration baseConf = HBaseConfiguration.create();
    Configuration conf = new Configuration();
    conf.set(HBASE_ZOOKEEPER_QUORUM, "xxx.xxx.xx.xx");
    Job job = new Job(baseConf, "APP-NAME");
    HTable table = new HTable(conf, "hbaseTargetTable");
    Partitioner partitioner = new IntPartitioner(importerParams.shards);

    JavaPairRDD<ImmutableBytesWritable, KeyValue> repartitionedRdd =
        javaPairRdd.repartitionAndSortWithinPartitions(partitioner);

    HFileOutputFormat2.configureIncrementalLoad(job, table);
    System.out.println("Done configuring incremental load....");

    Configuration config = job.getConfiguration();

    repartitionedRdd.saveAsNewAPIHadoopFile(
        "hfilesOutputPath",
        ImmutableBytesWritable.class,
        KeyValue.class,
        HFileOutputFormat2.class,
        config
    );
    System.out.println("Saved to HFiles to: " + importerParams.hfilesOutputPath);
}
All right, problem solved. The trick is to use the KryoSerializer; I added this to my Java code to register ImmutableBytesWritable:
SparkSession.Builder builder = SparkSession.builder().appName("AWESOME");
builder.config("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
SparkConf conf = new SparkConf().setAppName("AWESOME");
Class<?>[] classes = new Class[]{org.apache.hadoop.hbase.io.ImmutableBytesWritable.class};
conf.registerKryoClasses(classes);
builder.config(conf);
SparkSession spark = builder.getOrCreate();
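For what it's worth, here is my reading of why the original spark-submit flag alone didn't help (an inference from the Spark docs, not something confirmed above): spark.kryo.classesToRegister only takes effect once Kryo is actually the serializer in use, and Spark's default is the Java serializer, which is exactly what rejects the non-Serializable ImmutableBytesWritable. Setting both properties on the command line should be equivalent to the code above:
spark-submit --conf spark.serializer=org.apache.spark.serializer.KryoSerializer --conf spark.kryo.classesToRegister=org.apache.hadoop.hbase.io.ImmutableBytesWritable,org.apache.hadoop.hbase.KeyValue ...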

Javassist throws javassist.CannotCompileException when instrumenting setString method of org.h2.jdbc.JdbcPreparedStatement

I'm trying to instrument JDBC methods while a server is running. I have tried it by instrumenting the setString, setInt, and executeQuery methods while a simple MySQL query runs, as given in the JDBC examples. It works totally fine when I instrument the setString method by injecting the following line:
private void injectSetVariableMethods(CtMethod method) {
    if (isInEnum(method.getName().toUpperCase(), SetMethods.class)) {
        try {
            method.insertAt(1, true,
                "javaagent.JDBCPublisher.fillArrayList(String.valueOf($2), " +
                "Thread.currentThread().getStackTrace()[1].getMethodName().toUpperCase());"
            );
        } catch (CannotCompileException e) {
            e.printStackTrace();
        }
    }
}
But now, when I run it with a server that uses H2, it gives the following exception:
javassist.CannotCompileException: by javassist.bytecode.BadBytecode: setString (ILjava/lang/String;)V in org.h2.jdbc.JdbcPreparedStatement: failed to resolve types
at javassist.CtBehavior.insertAt(CtBehavior.java:1210)
at javaagent.JDBCTransformer.injectSetVariableMethods(JDBCClassTransformer.java:212)
at javaagent.JDBCTransformer.transform(JDBCClassTransformer.java:99)
at sun.instrument.TransformerManager.transform(TransformerManager.java:188)
at sun.instrument.InstrumentationImpl.transform(InstrumentationImpl.java:424)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.defineClass(DefaultClassLoader.java:188)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.defineClassHoldingLock(ClasspathManager.java:638)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.defineClass(ClasspathManager.java:613)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findClassImpl(ClasspathManager.java:574)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClassImpl(ClasspathManager.java:492)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClass(ClasspathManager.java:465)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.findLocalClass(DefaultClassLoader.java:216)
at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:395)
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:464)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:234)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.tomcat.jdbc.pool.ProxyConnection.invoke(ProxyConnection.java:126)
at org.apache.tomcat.jdbc.pool.JdbcInterceptor.invoke(JdbcInterceptor.java:109)
at org.carbon.ndatasource.rdbms.ConnectionRollbackOnReturnInterceptor.invoke(ConnectionRollbackOnReturnInterceptor.java:51)
at org.apache.tomcat.jdbc.pool.JdbcInterceptor.invoke(JdbcInterceptor.java:109)
at org.apache.tomcat.jdbc.pool.interceptor.AbstractCreateStatementInterceptor.invoke(AbstractCreateStatementInterceptor.java:67)
at org.apache.tomcat.jdbc.pool.JdbcInterceptor.invoke(JdbcInterceptor.java:109)
at org.apache.tomcat.jdbc.pool.interceptor.ConnectionState.invoke(ConnectionState.java:153)
at org.apache.tomcat.jdbc.pool.JdbcInterceptor.invoke(JdbcInterceptor.java:109)
at org.apache.tomcat.jdbc.pool.TrapException.invoke(TrapException.java:41)
at org.apache.tomcat.jdbc.pool.JdbcInterceptor.invoke(JdbcInterceptor.java:109)
at org.apache.tomcat.jdbc.pool.DisposableConnectionFacade.invoke(DisposableConnectionFacade.java:80)
at com.sun.proxy.$Proxy18.prepareStatement(Unknown Source)
at org.carbon.user.core.claim.dao.ClaimDAO.getDialectCount(ClaimDAO.java:160)
at org.carbon.user.core.common.DefaultRealm.populateProfileAndClaimMaps(DefaultRealm.java:429)
at org.carbon.user.core.common.DefaultRealm.init(DefaultRealm.java:105)
at org.carbon.user.core.common.DefaultRealmService.initializeRealm(DefaultRealmService.java:230)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:96)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:109)
at org.carbon.user.core.internal.Activator.startDeploy(Activator.java:68)
at org.wso2.carbon.user.core.internal.BundleCheckActivator.start(BundleCheckActivator.java:61)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:711)
at java.security.AccessController.doPrivileged(Native Method)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:702)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683)
at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381)
at org.eclipse.osgi.framework.internal.core.AbstractBundle.resume(AbstractBundle.java:390)
at org.eclipse.osgi.framework.internal.core.Framework.resumeBundle(Framework.java:1176)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:559)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:544)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.incFWSL(StartLevelManager.java:457)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:243)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:438)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:1)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:340)
Caused by: javassist.bytecode.BadBytecode: setString (ILjava/lang/String;)V in org.h2.jdbc.JdbcPreparedStatement: failed to resolve types
at javassist.bytecode.stackmap.MapMaker.make(MapMaker.java:111)
at javassist.bytecode.MethodInfo.rebuildStackMap(MethodInfo.java:423)
at javassist.bytecode.MethodInfo.rebuildStackMapIf6(MethodInfo.java:405)
at javassist.CtBehavior.insertAt(CtBehavior.java:1200)
... 59 more
Caused by: javassist.bytecode.BadBytecode: failed to resolve types
at javassist.bytecode.stackmap.MapMaker.make(MapMaker.java:169)
at javassist.bytecode.stackmap.MapMaker.make(MapMaker.java:108)
... 62 more
Caused by: javassist.NotFoundException: org.h2.value.ValueNull
at javassist.ClassPool.get(ClassPool.java:450)
at javassist.bytecode.stackmap.TypeData$TypeVar.fixTypes2(TypeData.java:345)
at javassist.bytecode.stackmap.TypeData$TypeVar.fixTypes(TypeData.java:330)
at javassist.bytecode.stackmap.TypeData$TypeVar.dfs(TypeData.java:274)
at javassist.bytecode.stackmap.MapMaker.fixTypes(MapMaker.java:394)
at javassist.bytecode.stackmap.MapMaker.make(MapMaker.java:167)
... 63 more
What my fillArrayList method does is pass those values into an ArrayList, checking the method name and adding quotes ('') around values set with setString. The instrumentation does seem to take effect at some point, because I am getting the rewritten queries with '?' replaced by the respective values (strings quoted, ints as normal). But once the server has started, it throws another set of exceptions which also involve H2.
[2015-10-13 17:18:56,600] ERROR {org.carbon.registry.core.jdbc.dao.JDBCLogsDAO} - Failed to get logs. General error: "java.lang.IndexOutOfBoundsException: Index: 1, Size: 1" [50000-140]
org.h2.jdbc.JdbcSQLException: General error: "java.lang.IndexOutOfBoundsException: Index: 1, Size: 1" [50000-140]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:327)
at org.h2.message.DbException.get(DbException.java:156)
at org.h2.message.DbException.convert(DbException.java:279)
at org.h2.message.DbException.toSQLException(DbException.java:252)
at org.h2.message.TraceObject.logAndConvert(TraceObject.java:386)
at org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:104)
at org.carbon.registry.core.jdbc.dao.JDBCLogsDAO.internalGetLogs(JDBCLogsDAO.java:427)
at org.carbon.registry.core.jdbc.dao.JDBCLogsDAO.getLogList(JDBCLogsDAO.java:317)
at org.carbon.registry.core.jdbc.EmbeddedRegistry.getLogs(EmbeddedRegistry.java:2332)
at org.carbon.registry.core.caching.CacheBackedRegistry.getLogs(CacheBackedRegistry.java:402)
at org.carbon.registry.core.session.UserRegistry.getLogsInternal(UserRegistry.java:1806)
at org.carbon.registry.core.session.UserRegistry.access$3600(UserRegistry.java:60)
at org.carbon.registry.core.session.UserRegistry$37.run(UserRegistry.java:1777)
at org.carbon.registry.core.session.UserRegistry$37.run(UserRegistry.java:1774)
at java.security.AccessController.doPrivileged(Native Method)
at org.carbon.registry.core.session.UserRegistry.getLogs(UserRegistry.java:1774)
at org.carbon.registry.indexing.ResourceSubmitter.submitResource(ResourceSubmitter.java:119)
at org.carbon.registry.indexing.ResourceSubmitter.run(ResourceSubmitter.java:76)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at javaagent.JDBCPublisher.getArrayList(JDBCAgentPublisher.java:151)
at javaagent.JDBCPublisher.modifyOriginalQuery(JDBCAgentPublisher.java:351)
at org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:84)
... 19 more
[2015-10-13 17:18:56,601] WARN {org.carbon.registry.indexing.ResourceSubmitter} - An error occurred while submitting resources for indexing
org.carbon.registry.core.exceptions.RegistryException: Failed to get logs. General error: "java.lang.IndexOutOfBoundsException: Index: 1, Size: 1" [50000-140]
at org.carbon.registry.core.jdbc.dao.JDBCLogsDAO.internalGetLogs(JDBCLogsDAO.java:465)
at org.carbon.registry.core.jdbc.dao.JDBCLogsDAO.getLogList(JDBCLogsDAO.java:317)
at org.carbon.registry.core.jdbc.EmbeddedRegistry.getLogs(EmbeddedRegistry.java:2332)
at org.carbon.registry.core.caching.CacheBackedRegistry.getLogs(CacheBackedRegistry.java:402)
at org.carbon.registry.core.session.UserRegistry.getLogsInternal(UserRegistry.java:1806)
at org.carbon.registry.core.session.UserRegistry.access$3600(UserRegistry.java:60)
at org.wso2.carbon.registry.core.session.UserRegistry$37.run(UserRegistry.java:1777)
at org.carbon.registry.core.session.UserRegistry$37.run(UserRegistry.java:1774)
at java.security.AccessController.doPrivileged(Native Method)
at org.carbon.registry.core.session.UserRegistry.getLogs(UserRegistry.java:1774)
at org.carbon.registry.indexing.ResourceSubmitter.submitResource(ResourceSubmitter.java:119)
at org.carbon.registry.indexing.ResourceSubmitter.run(ResourceSubmitter.java:76)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.h2.jdbc.JdbcSQLException: General error: "java.lang.IndexOutOfBoundsException: Index: 1, Size: 1" [50000-140]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:327)
at org.h2.message.DbException.get(DbException.java:156)
at org.h2.message.DbException.convert(DbException.java:279)
at org.h2.message.DbException.toSQLException(DbException.java:252)
at org.h2.message.TraceObject.logAndConvert(TraceObject.java:386)
at org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:104)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCLogsDAO.internalGetLogs(JDBCLogsDAO.java:427)
... 18 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at javaagent.JDBCPublisher.getArrayList(JDBCAgentPublisher.java:151)
at javaagent.JDBCPublisher.modifyOriginalQuery(JDBCAgentPublisher.java:351)
at org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:84)
... 19 more
It repeatedly throws a bunch of IndexOutOfBoundsExceptions for a properly assigned SQL query. What could be causing this issue, and what should I do to correct it?
Find below a stripped-down example which prints the values of each invocation of the method org.h2.jdbc.JdbcPreparedStatement.setString(int, String).
The following directory structure and content is assumed:
./instrumented/
h2-1.4.186.jar
javassist-3.7.ga.jar
SetStringDemo.java
execution of the example
javac -cp javassist-3.7.ga.jar;h2-1.4.186.jar SetStringDemo.java
java -cp .;instrumented/;javassist-3.7.ga.jar;h2-1.4.186.jar SetStringDemo
output
instrument class org.h2.jdbc.JdbcPreparedStatement
create test database and insert some rows
idx: 2 value: 'name 0'
idx: 2 value: 'name 1'
idx: 2 value: 'name 2'
idx: 2 value: 'name 3'
idx: 2 value: 'name 4'
So the problem is most probably in the way you instrument the class.
Code used for the example.
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;

public class SetStringDemo {

    // exception handling left out for the PoC
    public static void main(String[] args) throws Exception {
        if (Files.deleteIfExists(
                Paths.get("instrumented/org/h2/jdbc/JdbcPreparedStatement.class")
        )) {
            System.out.println("previously instrumented class removed");
        }

        System.out.println("instrument class org.h2.jdbc.JdbcPreparedStatement");
        ClassPool pool = ClassPool.getDefault();
        CtClass clazz = pool.get("org.h2.jdbc.JdbcPreparedStatement");
        CtMethod method = clazz.getDeclaredMethod("setString");
        method.insertAt(1,
            "System.out.println(\"idx: \" + $1 + \" value: '\" + $2 + \"'\");"
        );
        clazz.writeFile("instrumented/");

        System.out.println("create test database and insert some rows");
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:", "sa", "")) {
            String createTable = "CREATE TABLE TEST_TABLE(ID INT, NAME VARCHAR(1024))";
            conn.createStatement().executeUpdate(createTable);

            String insertSql = "INSERT INTO TEST_TABLE VALUES(?, ?)";
            try (PreparedStatement insertStmnt = conn.prepareStatement(insertSql)) {
                for (int i = 0; i < 5; i++) {
                    insertStmnt.setInt(1, i);
                    insertStmnt.setString(2, "name " + i);
                    insertStmnt.executeUpdate();
                }
            }
        }
    }
}
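One more thing worth checking in the agent itself, a hedged guess based on the javassist.NotFoundException: org.h2.value.ValueNull at the bottom of your trace: when insertAt rebuilds the stack map, Javassist resolves the method's types through its ClassPool, and in an OSGi container the default pool cannot see the bundle classloader that loaded H2. Making that loader visible to the pool inside the transform callback may let the types resolve:

// Inside ClassFileTransformer.transform(ClassLoader loader, ...),
// before calling insertAt: let Javassist resolve classes through the
// loader that is defining org.h2.jdbc.JdbcPreparedStatement (here,
// the OSGi bundle loader).
ClassPool pool = ClassPool.getDefault();
pool.insertClassPath(new javassist.LoaderClassPath(loader));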

GenericJDBCException: Cannot open connection ** Performance Metrics Report **

I have the following:
javax.swing.Timer timer = new javax.swing.Timer(3000, new java.awt.event.ActionListener() {
    @Override
    public void actionPerformed(ActionEvent e) {
        jLabelMsgL.setText("");
        NitgenSwingWorker sWorker = new NitgenSwingWorker();
        sWorker.execute();
    }
});
timer.start();

private final class NitgenSwingWorker extends SwingWorker<Boolean, Void> {

    @Override
    protected Boolean doInBackground() throws Exception {
        return nitgen.checkFinger();
    }

    @Override
    protected void done() {
        try {
            Boolean isCheckFinger = get();
            if (isCheckFinger) {
                delegate.getListaByIdEmpleado(123);
            } else {
                delegate.getListaByIdEmpleado(123);
            }
        } catch (InterruptedException | ExecutionException e) {
            System.err.println("NitgenSwingWorker Error: " + e.getMessage());
        }
    }
}
but when isCheckFinger is true, it throws:
Mon Jun 29 10:44:14 CDT 2015 INFO: ** Performance Metrics Report **
Longest reported query: 0 ms
Shortest reported query: 9223372036854775807 ms
Average query execution time: NaN ms
Number of statements executed: 0
Number of result sets created: 0
Number of statements prepared: 0
Number of prepared statement executions: 0
Mon Jun 29 10:44:14 CDT 2015 TRACE: send() packet payload:
0a 00 00 00 03 73 65 6c . . . . . s e l
65 63 74 20 31 3b e c t . 1 ;
org.hibernate.exception.GenericJDBCException: Cannot open connection
at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:103)
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:91)
...
Caused by: java.sql.SQLException: Connections could not be acquired from the underlying database!
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:529)
...
Caused by: com.mchange.v2.resourcepool.CannotAcquireResourceException: A ResourcePool could not acquire a resource from its primary factory or source.
at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1319)
at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:557)
...
It only sends me the error when nitgen.checkFinger returns true.
nitgen is the library of a fingerprint reader. I guess that when this returns true, it changes something in Swing and Hibernate can no longer be accessed.
The exception is not thrown until it tries to do this:
getSession().beginTransaction()
Otherwise there are no problems. Can anyone help me?
I solved it.
The problem was that I was using a JNI library, and this happened when a value was changed. Changing the default value fixed it:
public boolean checkFinger() throws Exception {
    //Boolean thereIsFinger = Boolean.FALSE; //original line
    Boolean thereIsFinger = Boolean.TRUE; //solution
    //bsp is a JNI Library
    bsp.CheckFinger(thereIsFinger);
    return thereIsFinger;
}
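A possible explanation for why this fixed it (an assumption on my part; the bsp native source isn't shown): for bsp.CheckFinger(thereIsFinger) to report a result through its argument, the native code would have to mutate the private value field of the Boolean it receives. Boolean.FALSE and Boolean.TRUE are shared cached instances, so flipping one from JNI corrupts every autoboxed false (or true) in the JVM, which could break completely unrelated code such as c3p0's connection acquisition. A safer sketch passes an instance that is not shared:

public boolean checkFinger() throws Exception {
    // Deliberately NOT Boolean.TRUE/FALSE: give JNI a private instance
    // to mutate so the shared autoboxing cache stays intact.
    // (Hypothetical fix; depends on what bsp.CheckFinger actually does.)
    Boolean thereIsFinger = new Boolean(false);
    bsp.CheckFinger(thereIsFinger);
    return thereIsFinger.booleanValue();
}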

Simple dbunit table comparison fails

Why doesn't this work? I'm trying to test that an empty database is the same before doing nothing as after doing nothing. In other words, this is the simplest dbunit test with a database that I can think of. And it doesn't work. The test methods are practically lifted from http://www.dbunit.org/howto.html
The error message I'm getting when comparing the empty database is:
java.lang.AssertionError: expected:
org.dbunit.dataset.xml.FlatXmlDataSet<AbstractDataSet[_orderedTableNameMap=null]>
but was:
org.dbunit.database.DatabaseDataSet<AbstractDataSet[_orderedTableNameMap=null]>
The error message I'm getting when comparing the empty table is:
java.lang.AssertionError: expected:
<org.dbunit.dataset.DefaultTable[_metaData=tableName=test, columns=[], keys=[], _rowList.size()=0]>
but was:
<org.dbunit.database.CachedResultSetTable[_metaData=table=test, cols=[(id, DOUBLE, noNulls), (txt, VARCHAR, nullable)], pk=[(id, DOUBLE, noNulls)], _rowList.size()=0]>
(I've added newlines for readability)
I can edit in the full stack trace (or anything else) if it'll be useful. Or you can browse through the public git repo: https://bitbucket.org/djeikyb/simple_dbunit
Do I need to somehow convert my actual IDataSet to XML and then back to IDataSet to compare them properly? What am I doing or expecting wrong?
public class TestCase
{
    private IDatabaseTester database_tester;

    @Before
    public void setUp() throws Exception
    {
        database_tester = new JdbcDatabaseTester("com.mysql.jdbc.Driver",
                                                 "jdbc:mysql://localhost/cal",
                                                 "cal",
                                                 "cal");

        IDataSet data_set = new FlatXmlDataSetBuilder().build(
                new FileInputStream("src/simple_dbunit/dataset.xml"));
        database_tester.setDataSet(data_set);

        database_tester.onSetup();
    }

    @Test
    public void testDbNoChanges() throws Exception
    {
        // expected
        IDataSet expected_data_set = new FlatXmlDataSetBuilder().build(
                new FileInputStream("src/simple_dbunit/dataset.xml"));

        // actual
        IDatabaseConnection connection = database_tester.getConnection();
        IDataSet actual_data_set = connection.createDataSet();

        // test
        assertEquals(expected_data_set, actual_data_set);
    }

    @Test
    public void testTableNoChanges() throws Exception
    {
        // expected
        IDataSet expected_data_set = new FlatXmlDataSetBuilder().build(
                new FileInputStream("src/simple_dbunit/dataset.xml"));
        ITable expected_table = expected_data_set.getTable("test");

        // actual
        IDatabaseConnection connection = database_tester.getConnection();
        IDataSet actual_data_set = connection.createDataSet();
        ITable actual_table = actual_data_set.getTable("test");

        // test
        assertEquals(expected_table, actual_table);
    }
}
When you compare IDataSet and other DbUnit components, you have to use the assert methods provided by DbUnit.
If you use the assert methods provided by JUnit, the objects are only compared via the equals method inherited from Object.
That's why you get the error complaining about different object types.
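A minimal sketch of the fix, assuming DbUnit's org.dbunit.Assertion helper (it has overloads for both data sets and tables):

import org.dbunit.Assertion;

// instead of JUnit's assertEquals(expected_data_set, actual_data_set):
Assertion.assertEquals(expected_data_set, actual_data_set);

// instead of JUnit's assertEquals(expected_table, actual_table):
Assertion.assertEquals(expected_table, actual_table);

Unlike Object.equals, these compare the table contents row by row and column by column.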
