I'm working with GraphX and Spark SQL and I'm trying to create a DataFrame (Dataset) inside a graph node. To create a DataFrame I need the SparkSession (spark.createDataFrame(rows, schema)). Whatever I try, I get an error. This is my code:
SparkSession spark = SparkSession.builder()
        .master("spark://home:7077")
        .appName("testgraph")
        .getOrCreate();
JavaSparkContext sc = new JavaSparkContext(spark.sparkContext());

// read tree file
JavaRDD<String> tree_file = sc.textFile(args[1]);
JavaPairRDD<String[], Long> node_pair = tree_file.map(l -> l.split(" ")).zipWithIndex();

// create vertices
RDD<Tuple2<Object, Tuple2<Dataset<Row>, Clauses>>> verteces = node_pair.map(t -> {
    List<StructField> fields = new ArrayList<StructField>();
    List<Row> rows = new ArrayList<>();

    String[] vars = Arrays.copyOfRange(t._1(), 2, t._1().length);
    for (int i = 0; i < vars.length; i++) {
        fields.add(DataTypes.createStructField(vars[i], DataTypes.BooleanType, true));
    }
    StructType schema = DataTypes.createStructType(fields);
    Dataset<Row> ds = spark.createDataFrame(rows, schema);

    return new Tuple2<>((Object) (t._2 + 1), ds);
}).rdd();
This is the error I'm getting:
16/08/23 15:25:36 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 3, 192.168.1.5): java.lang.NullPointerException
at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:112)
at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:110)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
at org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:328)
at Main.lambda$main$e7daa47c$1(Main.java:62)
at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1028)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:148)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I also tried to get the session inside map() with:
SparkSession ss = SparkSession.builder()
        .master("spark://home:7077")
        .appName("testgraph")
        .getOrCreate();
That also gives an error:
16/08/23 15:00:29 WARN TaskSetManager: Lost task 0.0 in stage 4.0 (TID 7, 192.168.1.5): java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
at org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I hope someone can help me. I can't find a solution.
Thanks!
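For what it's worth: SparkSession lives only on the driver, while the function passed to map() runs on the executors, where the session's internal state is null; that is what both stack traces above point to. A minimal sketch of a driver-only alternative, assuming the per-node DataFrames are only needed on the driver: carry just the raw variable names in the RDD and call createDataFrame outside of any map():

// Sketch only (assumption: the DataFrames are not needed inside the graph itself).
// Keep SparkSession calls out of executor-side lambdas.
JavaRDD<Tuple2<Long, String[]>> nodeVars = node_pair.map(t ->
        new Tuple2<>(t._2() + 1, Arrays.copyOfRange(t._1(), 2, t._1().length)));

for (Tuple2<Long, String[]> node : nodeVars.collect()) {   // driver side
    List<StructField> fields = new ArrayList<>();
    for (String var : node._2()) {
        fields.add(DataTypes.createStructField(var, DataTypes.BooleanType, true));
    }
    StructType schema = DataTypes.createStructType(fields);
    Dataset<Row> ds = spark.createDataFrame(new ArrayList<Row>(), schema); // valid here
    // ... use ds ...
}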
Related
I created a Spark Streaming simulation for my tutorial. When I use outputMode("complete"), I get an error.
ERROR:
Exception in thread "main" org.apache.spark.sql.AnalysisException: Complete output mode not supported when there are no streaming aggregations on streaming DataFrames/Datasets;
My dataset example:
2006-04-01 00:00:00.000 +0200,Partly Cloudy,rain,9.472222222222221,7.3888888888888875,0.89,14.1197,251.0,15.826300000000002,0.0,1015.13,Partly cloudy throughout the day.
First process code (partitioning by Summary):
System.setProperty("hadoop.home.dir", "C:\\hadoop-common-2.2.0-bin-master");

SparkSession sparkSession = SparkSession.builder()
        .appName("SparkStreamingMessageListener")
        .master("local")
        .getOrCreate();

StructType schema = new StructType()
        .add("Formatted Date", "String")
        .add("Summary", "String")
        .add("Precip Type", "String")
        .add("Temperature", "Double")
        .add("Apparent Temperature", "Double")
        .add("Humidity", "Double")
        .add("Wind Speed (km/h)", "Double")
        .add("Wind Bearing (degrees)", "Double")
        .add("Visibility (km)", "Double")
        .add("Loud Cover", "Double")
        .add("Pressure(milibars)", "Double")
        .add("Dailiy Summary", "String");

Dataset<Row> formatted_date = sparkSession.read().schema(schema).option("header", true).csv("C:\\Users\\Kaan\\Desktop\\Kaan Proje\\SparkStreamingListener\\archivecsv\\weatherHistory.csv");
Dataset<Row> avg = formatted_date.groupBy("Summary", "Precip Type").avg("Temperature").sort(functions.desc("avg(Temperature)"));

formatted_date.write().partitionBy("Summary").csv("C:\\Users\\Kaan\\Desktop\\Kaan Proje\\SparkStreamingListener\\archivecsv\\weatherHistoryFile\\");
Second listener process code:
SparkSession sparkSession = SparkSession.builder()
        .appName("SparkStreamingMessageListener1")
        .master("local")
        .getOrCreate();

StructType schema1 = new StructType()
        .add("Formatted Date", "String")
        .add("Precip Type", "String")
        .add("Temperature", "Double")
        .add("Apparent Temperature", "Double")
        .add("Humidity", "Double")
        .add("Wind Speed (km/h)", "Double")
        .add("Wind Bearing (degrees)", "Double")
        .add("Visibility (km)", "Double")
        .add("Loud Cover", "Double")
        .add("Pressure(milibars)", "Double")
        .add("Dailiy Summary", "String");

Dataset<Row> rawData = sparkSession.readStream().schema(schema1).option("sep", ",").csv("C:\\Users\\Kaan\\Desktop\\Kaan Proje\\sparkStreamingWheather\\*");
Dataset<Row> heatData = rawData.select("Temperature", "Precip Type").where("Temperature>10");

StreamingQuery start = heatData.writeStream().outputMode("complete").format("console").start();
start.awaitTermination();
I created the streaming simulation by copying the partitioned files into the listener file path shown above.
I would be glad if you could help. Thanks.
The error is pretty specific about the actual problem: the output mode complete is not supported for this type of query.
As stated in the Structured Streaming Guide on Output Modes:
"Complete mode not supported as it is infeasible to keep all unaggregated data in the Result Table."
The issue is solved by selecting the append mode:
StreamingQuery start = heatData.writeStream().outputMode("append").format("console").start();
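If the complete output mode is actually needed, the streaming query has to contain an aggregation. A rough sketch using the columns from the schema above (assumed, not taken from the original code):

Dataset<Row> avgTemp = rawData
        .groupBy("Precip Type")
        .avg("Temperature");                 // streaming aggregation

StreamingQuery query = avgTemp.writeStream()
        .outputMode("complete")              // allowed because the query aggregates
        .format("console")
        .start();
query.awaitTermination();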
I want to read data from a Kafka topic, group by key values, and write the results into text files.
public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession
            .builder()
            .appName("Sparkconsumer")
            .master("local[*]")
            .getOrCreate();

    SQLContext sqlContext = spark.sqlContext();
    SparkContext context = spark.sparkContext();

    Dataset<Row> lines = spark
            .readStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", "localhost:9092")
            .option("subscribe", "test-topic")
            .load();

    Dataset<Row> r = lines.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
    r.printSchema();
    r.createOrReplaceTempView("basicView");

    sqlContext.sql("select * from basicView")
            .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
            .writeStream()
            .outputMode("append")
            .format("console")
            .option("path", "usr//path")
            .start()
            .awaitTermination();
}
The following points are misleading in your code:
To read from Kafka and write into a file, you do not need a SparkContext or SQLContext.
You are casting your key and value to String twice.
The format of your output query should not be console if you want to store the data in a file.
An example can be found in the Spark Structured Streaming + Kafka Integration Guide and the Spark Structured Streaming Programming Guide:
public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession
            .builder()
            .appName("Sparkconsumer")
            .master("local[*]")
            .getOrCreate();

    Dataset<Row> lines = spark
            .readStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", "localhost:9092")
            .option("subscribe", "test-topic")
            .load();

    Dataset<Row> r = lines
            .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
            // do some more processing such as 'groupBy'

    r.writeStream()
            .format("parquet")   // can be "orc", "json", "csv", etc.
            .outputMode("append")
            .option("path", "path/to/destination/dir")
            .option("checkpointLocation", "/path/to/checkpoint/dir")
            .start()
            .awaitTermination();
}
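For the "group by key" part of the question, a minimal sketch continuing from the Dataset r above: an aggregation can be printed to the console in complete mode (writing an aggregation to a file sink in append mode would additionally require a watermark):

Dataset<Row> counts = r.groupBy("key").count();

counts.writeStream()
        .outputMode("complete")
        .format("console")
        .start()
        .awaitTermination();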
I would like to write a simple Spark SQL program that reads a file called u.data, which contains the movie ratings, creates a Dataset of Rows, and then prints the first rows of the Dataset.
My premise was to read the file into a JavaRDD and map the RDD to a ratings object (the object has two fields, movieID and rating). I just want to print the first rows of this Dataset.
I'm using the Java language and Spark SQL.
public static void main(String[] args) {
    App obj = new App();
    SparkSession spark = SparkSession.builder().appName("Java Spark SQL basic example").getOrCreate();
    Map<Integer, String> movieNames = obj.loadMovieNames();

    JavaRDD<String> lines = spark.read().textFile("hdfs:///ml-100k/u.data").javaRDD();
    JavaRDD<MovieRatings> movies = lines.map(line -> {
        String[] parts = line.split(" ");
        MovieRatings ratingsObject = new MovieRatings();
        ratingsObject.setMovieID(Integer.parseInt(parts[1].trim()));
        ratingsObject.setRating(Integer.parseInt(parts[2].trim()));
        return ratingsObject;
    });

    Dataset<Row> movieDataset = spark.createDataFrame(movies, MovieRatings.class);

    Encoder<Integer> intEncoder = Encoders.INT();
    Dataset<Integer> HUE = movieDataset.map(
        new MapFunction<Row, Integer>() {
            private static final long serialVersionUID = -5982149277350252630L;

            @Override
            public Integer call(Row row) throws Exception {
                return row.getInt(0);
            }
        }, intEncoder
    );
    HUE.show();

    // stop the session
    spark.stop();
}
I've tried a lot of the possible solutions I found, but all of them end with the same error:
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, localhost, executor 1): java.lang.ArrayIndexOutOfBoundsException: 1
at com.ericsson.SparkMovieRatings.App.lambda$main$1e634467$1(App.java:63)
at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1040)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:253)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
And here is a sample of the u.data file:
196 242 3 881250949
186 302 3 891717742
22 377 1 878887116
244 51 2 880606923
166 346 1 886397596
298 474 4 884182806
115 265 2 881171488
253 465 5 891628467
305 451 3 886324817
6 86 3 883603013
62 257 2 879372434
286 1014 5 879781125
200 222 5 876042340
210 40 3 891035994
224 29 3 888104457
303 785 3 879485318
122 387 5 879270459
194 274 2 879539794
The first column represents the UserID, the second the MovieID, the third the rating, and the last one the timestamp.
As mentioned before, your data is not separated by a single space (the fields in u.data are tab-separated), so splitting on " " does not give you the columns you expect.
I'll show you two possible solutions: the first one is based on the RDD API and the second one on Spark SQL, which is, in general, the better choice in terms of performance.
RDD (use built-in types to reduce the overhead):
import java.util.ArrayList;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class SparkDriver {

    public static void main(String args[]) {
        // Create a configuration object and set the name of the application
        SparkConf conf = new SparkConf().setAppName("application_name");

        // Create a Spark context object
        JavaSparkContext context = new JavaSparkContext(conf);

        // Create the final RDD (suppose you have a text file)
        // Column layout of u.data: userID, movieID, rating, timestamp
        JavaPairRDD<Integer, Integer> movieRatingRDD =
                context.textFile("u.data.txt")
                       .mapToPair(line -> {
                           String[] tokens = line.split("\\s+");
                           int movieID = Integer.parseInt(tokens[1]);
                           int rating = Integer.parseInt(tokens[2]);
                           return new Tuple2<Integer, Integer>(movieID, rating);
                       });

        // Keep in mind that the take operation returns the first n elements
        // and the order is the order of the file.
        ArrayList<Tuple2<Integer, Integer>> list = new ArrayList<>(movieRatingRDD.take(10));

        System.out.println("MovieID\tRating");
        for (Tuple2<Integer, Integer> tuple : list) {
            System.out.println(tuple._1 + "\t" + tuple._2);
        }

        context.close();
    }
}
SQL
import org.apache.spark.api.java.function.MapFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkDriver {

    public static void main(String[] args) {
        // Create the Spark session
        SparkSession session = SparkSession.builder().appName("Spark app sql version").getOrCreate();

        // u.data is tab-separated: userID, movieID, rating, timestamp
        Dataset<MovieRatings> ratings = session.read()
                .format("csv")
                .option("header", false)
                .option("inferSchema", true)
                .option("sep", "\t")
                .load("u.data.txt")
                .map((MapFunction<Row, MovieRatings>) row -> {
                    int movieID = row.getInt(1);
                    int rating = row.getInt(2);
                    return new MovieRatings(movieID, rating);
                }, Encoders.bean(MovieRatings.class));

        // Print the first rows
        ratings.show(10);

        // Stop the session
        session.stop();
    }
}
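Note that Encoders.bean expects MovieRatings to be a Java bean. A minimal sketch of such a class, with field names assumed from the snippets above:

public class MovieRatings implements java.io.Serializable {
    private int movieID;
    private int rating;

    public MovieRatings() { }                          // no-arg constructor required by Encoders.bean
    public MovieRatings(int movieID, int rating) {
        this.movieID = movieID;
        this.rating = rating;
    }

    public int getMovieID() { return movieID; }
    public void setMovieID(int movieID) { this.movieID = movieID; }
    public int getRating() { return rating; }
    public void setRating(int rating) { this.rating = rating; }
}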
I am a beginner in this field, so I cannot make sense of this error...
HBase ver: 0.98.24-hadoop2
Spark ver: 2.1.0
The following code tries to put data received from a Kafka producer via Spark Streaming into HBase.
The Kafka input data format is like this:
Line1,TAG1,123
Line1,TAG2,134
The Spark Streaming process splits each received line on the delimiter ',' and then puts the data into HBase.
However, my application hits an error when it calls the htable.put() method.
Can anyone explain why the code below throws an error?
Thank you.
JavaDStream<String> records = lines.flatMap(new FlatMapFunction<String, String>() {
    private static final long serialVersionUID = 7113426295831342436L;

    HTable htable;

    public HTable set() throws IOException {
        Configuration hconfig = HBaseConfiguration.create();
        hconfig.set("hbase.zookeeper.property.clientPort", "2222");
        hconfig.set("hbase.zookeeper.quorum", "127.0.0.1");

        HConnection hconn = HConnectionManager.createConnection(hconfig);

        htable = new HTable(hconfig, tableName);
        return htable;
    };

    @Override
    public Iterator<String> call(String x) throws IOException {
        ////////////// Put into HBase /////////////////////
        String[] data = x.split(",");

        if (null != data && data.length > 2) {
            SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMddHHmmss");
            String ts = sdf.format(new Date());

            Put put = new Put(Bytes.toBytes(ts));
            put.addImmutable(Bytes.toBytes(familyName), Bytes.toBytes("LINEID"), Bytes.toBytes(data[0]));
            put.addImmutable(Bytes.toBytes(familyName), Bytes.toBytes("TAGID"), Bytes.toBytes(data[1]));
            put.addImmutable(Bytes.toBytes(familyName), Bytes.toBytes("VAL"), Bytes.toBytes(data[2]));

            /* I've checked that the data passed looks like this:
            {"totalColumns":3,"row":"20170120200927",
             "families":{"TAGVALUE":
             [{"qualifier":"LINEID","vlen":3,"tag":[],"timestamp":9223372036854775807},
              {"qualifier":"TAGID","vlen":3,"tag":[],"timestamp":9223372036854775807},
              {"qualifier":"VAL","vlen":6,"tag":[],"timestamp":9223372036854775807}]}} */

            //********************* ERROR *******************//
            htable.put(put);
            htable.close();
        }

        return Arrays.asList(COLDELIM.split(x)).iterator();
    }
});
Error:
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 23.0 failed 1 times, most recent failure: Lost task 0.0 in stage 23.0 (TID 23, localhost, executor driver): java.lang.NullPointerException
at org.test.avro.sparkAvroConsumer$2.call(sparkAvroConsumer.java:154)
at org.test.avro.sparkAvroConsumer$2.call(sparkAvroConsumer.java:123)
at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$fn$1$1.apply(JavaDStreamLike.scala:171)
at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$fn$1$1.apply(JavaDStreamLike.scala:171)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:389)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at scala.collection.AbstractIterator.to(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1353)
at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1353)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
You are never calling the method public HTable set() throws IOException, which returns the htable instance.
Since the htable instance is null and you are calling an operation on it:
htable.put()
you are getting an NPE like the one below:
stage 23.0 failed 1 times, most recent failure: Lost task 0.0 in stage 23.0 (TID 23, localhost, executor driver): java.lang.NullPointerException
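One minimal way to avoid the NPE, sketched under the assumption that the rest of the flatMap stays as posted, is to initialize the table lazily inside call() before the first put (a connection per record is expensive, so batching per partition would be better, but this shows the immediate fix):

@Override
public Iterator<String> call(String x) throws IOException {
    if (htable == null) {
        set();                                // creates the HTable and assigns the field
    }
    // ... build the Put exactly as above, then:
    htable.put(put);
    // do not close htable here if it is reused for later records
    return Arrays.asList(COLDELIM.split(x)).iterator();
}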
I am relatively new to Scala and to Couchbase, but I need to learn both fast. Recently, while trying to run a sample application using the Java Couchbase SDK from Scala, I ran into the following problem.
[cb-core-3-2] WARN com.couchbase.client.core.CouchbaseCore - Exception while Handling Request Events RequestEvent{request=null}
java.lang.IndexOutOfBoundsException: Index: 1854, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at com.couchbase.client.core.config.DefaultCouchbaseBucketConfig.nodeIndexForMaster(DefaultCouchbaseBucketConfig.java:135)
at com.couchbase.client.core.node.locate.KeyValueLocator.calculateNodeId(KeyValueLocator.java:165)
at com.couchbase.client.core.node.locate.KeyValueLocator.locateForCouchbaseBucket(KeyValueLocator.java:124)
at com.couchbase.client.core.node.locate.KeyValueLocator.locateAndDispatch(KeyValueLocator.java:84)
at com.couchbase.client.core.RequestHandler.dispatchRequest(RequestHandler.java:219)
at com.couchbase.client.core.RequestHandler.onEvent(RequestHandler.java:176)
at com.couchbase.client.core.RequestHandler.onEvent(RequestHandler.java:71)
at com.couchbase.client.deps.com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
[error] (run-main-0) java.lang.RuntimeException: java.util.concurrent.TimeoutException
java.lang.RuntimeException: java.util.concurrent.TimeoutException
at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:71)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:354)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:349)
at App$.main(Application.scala:28)
at App.main(Application.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
Caused by: java.util.concurrent.TimeoutException
at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:71)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:354)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:349)
at App$.main(Application.scala:28)
at App.main(Application.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
[trace] Stack trace suppressed: run last compile:run for the full output.
[cb-core-3-1] WARN com.couchbase.client.core.CouchbaseCore - Exception while Handling Response Events null
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
at com.couchbase.client.deps.com.lmax.disruptor.BlockingWaitStrategy.waitFor(BlockingWaitStrategy.java:45)
at com.couchbase.client.deps.com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
at com.couchbase.client.deps.com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:124)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
[cb-core-3-2] WARN com.couchbase.client.core.CouchbaseCore - Exception while Handling Request Events RequestEvent{request=null}
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
at com.couchbase.client.deps.com.lmax.disruptor.BlockingWaitStrategy.waitFor(BlockingWaitStrategy.java:45)
at com.couchbase.client.deps.com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
at com.couchbase.client.deps.com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:124)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
And this is the code that generated the error:
import com.couchbase.client.java._
import com.couchbase.client.core.time._
import com.couchbase.client.java.document._
import com.couchbase.client.java.document.json._
import com.couchbase.client.java.query._
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment
object App {
  def main(args: Array[String]): Unit = {
    // Initialize the connection (connects to localhost)
    val env = DefaultCouchbaseEnvironment.builder()
      .connectTimeout(5000)
      .bootstrapCarrierEnabled(false)
      .build()
    val cluster = CouchbaseCluster.create(env, "127.0.0.1")

    // Opens the "default" bucket
    val bucket = cluster.openBucket("default")

    // Create a JSON document
    val user: JsonObject = JsonObject.create()
      .put("firstname", "Walter")
      .put("lastname", "White")
      .put("job", "chemistry teacher")
      .put("age", 50)
    val stored: JsonDocument = bucket.upsert(JsonDocument.create("walter", user));

    // Load the document and print it
    // (prints content and metadata of the stored document)
    println(bucket.get("walter"))

    // Just close a single bucket
    bucket.close();

    // Disconnect and close all buckets
    cluster.disconnect();
  }
}
EDIT:
I am a little new to this, but here is what I managed to get out of the debugger:
this = {DefaultCouchbaseBucketConfig#2385} "DefaultCouchbaseBucketConfig{name='testBucket', locator=VBUCKET, uri='/pools/default/buckets/testBucket?bucket_uuid=54c1356c57dea1d640837c678f87d5e4', streamingUri='/pools/default/bucketsStreaming/testBucket?bucket_uuid=54c1356c57dea1d640837c678f87d5e4', nodeInfo=[NodeInfo{, hostname=localhost/127.0.0.1, configPort=0, directServices={CONFIG=8091, QUERY=8093, VIEW=8092, BINARY=11210}, sslServices={CONFIG=18091, QUERY=18093, VIEW=18092, BINARY=11207}}], partitionInfo=PartitionInfo{numberOfReplicas=1, partitionHosts=[localhost], partitions=[], tainted=false}, tainted=false, rev=23}"
partitionInfo = {CouchbasePartitionInfo#2391} "PartitionInfo{numberOfReplicas=1, partitionHosts=[localhost], partitions=[], tainted=false}"
numberOfReplicas = 1
partitionHosts = {String[1]#2428}
partitions = {ArrayList#2390} size = 0
forwardPartitions = null
tainted = false
partitionHosts = {ArrayList#2396} size = 1
0 = {DefaultNodeInfo#2425} "NodeInfo{, hostname=localhost/127.0.0.1, configPort=8091, directServices={CONFIG=8091, BINARY=11210, VIEW=8092}, sslServices={}}"
nodesWithPrimaryPartitions = {HashSet#2397} size = 0
tainted = false
rev = 23
name = "testBucket"
value = {char[10]#2422}
hash = 1241531676
password = ""
value = {char[0]#2421}
hash = 0
locator = {BucketNodeLocator#2400} "VBUCKET"
name = "VBUCKET"
ordinal = 0
uri = "/pools/default/buckets/testBucket?bucket_uuid=54c1356c57dea1d640837c678f87d5e4"
value = {char[78]#2411}
hash = 0
streamingUri = "/pools/default/bucketsStreaming/testBucket?bucket_uuid=54c1356c57dea1d640837c678f87d5e4"
value = {char[87]#2420}
hash = 0
nodeInfo = {ArrayList#2403} size = 1
0 = {DefaultNodeInfo#2408} "NodeInfo{, hostname=localhost/127.0.0.1, configPort=0, directServices={CONFIG=8091, QUERY=8093, VIEW=8092, BINARY=11210}, sslServices={CONFIG=18091, QUERY=18093, VIEW=18092, BINARY=11207}}"
enabledServices = 15
partition = 7620
useFastForward = false
EDIT 2:
I had a look at the Couchbase console log and I am constantly getting the following:
Service 'memcached' exited with status 1. Restarting. Messages: Failed to open library "/Users/luishreis/Downloads/couchbase-server-enterprise_4/Couchbase Server.app/Contents/Resources/couchbase-core/lib/memcached/stdin_term_handler.so": dlopen(/Users/luishreis/Downloads/couchbase-server-enterprise_4/Couchbase Server.app/Contents/Resources/couchbase-core/lib/memcached/stdin_term_handler.dylib, 6): image not found
Unable to load extension /Users/luishreis/Downloads/couchbase-server-enterprise_4/Couchbase Server.app/Contents/Resources/couchbase-core/lib/memcached/stdin_term_handler.so using the config
Any help on the matter would be appreciated.
Many thanks.