I am a beginner in this field, so I can't quite make sense of this...
HBase ver: 0.98.24-hadoop2
Spark ver: 2.1.0
The following code tries to put data received from a Spark Streaming-Kafka producer into HBase.
The Kafka input data format looks like this:
Line1,TAG1,123
Line1,TAG2,134
The Spark Streaming process splits each received line on the ',' delimiter and then puts the data into HBase.
However, my application hits an error when it calls the htable.put() method.
Can anyone help me understand why the code below throws an error?
Thank you.
JavaDStream<String> records = lines.flatMap(new FlatMapFunction<String, String>() {
    private static final long serialVersionUID = 7113426295831342436L;

    HTable htable;

    public HTable set() throws IOException {
        Configuration hconfig = HBaseConfiguration.create();
        hconfig.set("hbase.zookeeper.property.clientPort", "2222");
        hconfig.set("hbase.zookeeper.quorum", "127.0.0.1");
        HConnection hconn = HConnectionManager.createConnection(hconfig);
        htable = new HTable(hconfig, tableName);
        return htable;
    }

    @Override
    public Iterator<String> call(String x) throws IOException {
        ////////////// Put into HBase /////////////////////
        String[] data = x.split(",");
        if (null != data && data.length > 2) {
            SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMddHHmmss");
            String ts = sdf.format(new Date());
            Put put = new Put(Bytes.toBytes(ts));
            put.addImmutable(Bytes.toBytes(familyName), Bytes.toBytes("LINEID"), Bytes.toBytes(data[0]));
            put.addImmutable(Bytes.toBytes(familyName), Bytes.toBytes("TAGID"), Bytes.toBytes(data[1]));
            put.addImmutable(Bytes.toBytes(familyName), Bytes.toBytes("VAL"), Bytes.toBytes(data[2]));
            /* I've checked the data passed in, and it looks like this:
               {"totalColumns":3,"row":"20170120200927",
                "families":{"TAGVALUE":
                  [{"qualifier":"LINEID","vlen":3,"tag":[],"timestamp":9223372036854775807},
                   {"qualifier":"TAGID","vlen":3,"tag":[],"timestamp":9223372036854775807},
                   {"qualifier":"VAL","vlen":6,"tag":[],"timestamp":9223372036854775807}]}} */
            //********************* ERROR *******************//
            htable.put(put);
            htable.close();
        }
        return Arrays.asList(COLDELIM.split(x)).iterator();
    }
});
Error:
Exception in thread "main" org.apache.spark.SparkException: Job
aborted due to stage failure: Task 0 in stage 23.0 failed 1 times, most recent failure: Lost task 0.0 in stage 23.0 (TID 23, localhost, executor driver): java.lang.NullPointerException
at org.test.avro.sparkAvroConsumer$2.call(sparkAvroConsumer.java:154)
at org.test.avro.sparkAvroConsumer$2.call(sparkAvroConsumer.java:123)
at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$fn$1$1.apply(JavaDStreamLike.scala:171)
at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$fn$1$1.apply(JavaDStreamLike.scala:171)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:389)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at scala.collection.AbstractIterator.to(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1353)
at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1353)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
You are not calling the method public HTable set() throws IOException, which returns the htable instance.
Since the htable instance is therefore null and you are calling htable.put() on it, you get an NPE like the one below:
stage 23.0 failed 1 times, most recent failure: Lost task 0.0 in stage 23.0 (TID 23, localhost, executor driver): java.lang.NullPointerException
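One way to avoid the NPE (a minimal sketch that keeps the anonymous FlatMapFunction from the question and assumes the same tableName, familyName and COLDELIM fields) is to make sure htable is initialized before the first put, for example by calling set() lazily inside call():
@Override
public Iterator<String> call(String x) throws IOException {
    if (htable == null) {
        htable = set();   // create the HTable on first use so put() is not called on null
    }
    String[] data = x.split(",");
    if (null != data && data.length > 2) {
        SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMddHHmmss");
        String ts = sdf.format(new Date());
        Put put = new Put(Bytes.toBytes(ts));
        put.addImmutable(Bytes.toBytes(familyName), Bytes.toBytes("LINEID"), Bytes.toBytes(data[0]));
        put.addImmutable(Bytes.toBytes(familyName), Bytes.toBytes("TAGID"), Bytes.toBytes(data[1]));
        put.addImmutable(Bytes.toBytes(familyName), Bytes.toBytes("VAL"), Bytes.toBytes(data[2]));
        htable.put(put);   // htable is initialized here, so no NullPointerException
    }
    return Arrays.asList(COLDELIM.split(x)).iterator();
}
Also note that calling htable.close() after every record, as in the original code, would likely make later puts fail as well; in practice you would keep the table open and write per partition (for example via foreachRDD/foreachPartition) rather than per record.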
Related
I would like to write a simple Spark SQL program that reads a file called u.data, which contains the movie ratings, creates a Dataset of Rows, and then prints the first rows of the Dataset.
My approach is to read the file into a JavaRDD and map that RDD to a ratings object (the object has two fields, movieID and rating). Then I just want to print the first rows of this Dataset.
I'm using Java and Spark SQL.
public static void main(String[] args) {
    App obj = new App();
    SparkSession spark = SparkSession.builder().appName("Java Spark SQL basic example").getOrCreate();
    Map<Integer, String> movieNames = obj.loadMovieNames();

    JavaRDD<String> lines = spark.read().textFile("hdfs:///ml-100k/u.data").javaRDD();

    JavaRDD<MovieRatings> movies = lines.map(line -> {
        String[] parts = line.split(" ");
        MovieRatings ratingsObject = new MovieRatings();
        ratingsObject.setMovieID(Integer.parseInt(parts[1].trim()));
        ratingsObject.setRating(Integer.parseInt(parts[2].trim()));
        return ratingsObject;
    });

    Dataset<Row> movieDataset = spark.createDataFrame(movies, MovieRatings.class);

    Encoder<Integer> intEncoder = Encoders.INT();
    Dataset<Integer> HUE = movieDataset.map(
        new MapFunction<Row, Integer>() {
            private static final long serialVersionUID = -5982149277350252630L;

            @Override
            public Integer call(Row row) throws Exception {
                return row.getInt(0);
            }
        }, intEncoder
    );

    HUE.show();

    // stop the session
    spark.stop();
}
I've tried a lot of the possible solutions I found, but all of them gave the same error:
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, localhost, executor 1): java.lang.ArrayIndexOutOfBoundsException: 1
at com.ericsson.SparkMovieRatings.App.lambda$main$1e634467$1(App.java:63)
at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1040)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:253)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
And here is a sample of the u.data file:
196 242 3 881250949
186 302 3 891717742
22 377 1 878887116
244 51 2 880606923
166 346 1 886397596
298 474 4 884182806
115 265 2 881171488
253 465 5 891628467
305 451 3 886324817
6 86 3 883603013
62 257 2 879372434
286 1014 5 879781125
200 222 5 876042340
210 40 3 891035994
224 29 3 888104457
303 785 3 879485318
122 387 5 879270459
194 274 2 879539794
The first column is the UserID, the second the MovieID, the third the rating, and the last one the timestamp.
As mentioned before, your data is not separated by a single space, so line.split(" ") does not produce the fields you expect.
I'll show you two possible solutions: the first based on RDDs and the second based on Spark SQL, which is, in general, the better solution in terms of performance.
RDD (you should use built-in types to reduce the overhead):
import java.util.ArrayList;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SparkDriver {
    public static void main(String[] args) {
        // Create a configuration object and set the name of the application
        SparkConf conf = new SparkConf().setAppName("application_name");

        // Create a Spark context object
        JavaSparkContext context = new JavaSparkContext(conf);

        // Create the final RDD (suppose you have a text file).
        // The u.data columns are whitespace separated: userID, movieID, rating, timestamp
        JavaPairRDD<Integer, Integer> movieRatingRDD = context.textFile("u.data.txt")
            .mapToPair(line -> {
                String[] tokens = line.split("\\s+");
                int movieID = Integer.parseInt(tokens[1]);
                int rating = Integer.parseInt(tokens[2]);
                return new Tuple2<Integer, Integer>(movieID, rating);
            });

        // Keep in mind that take returns the first n elements
        // in the order of the file.
        ArrayList<Tuple2<Integer, Integer>> list = new ArrayList<>(movieRatingRDD.take(10));

        System.out.println("MovieID\tRating");
        for (Tuple2<Integer, Integer> tuple : list) {
            System.out.println(tuple._1 + "\t" + tuple._2);
        }

        context.close();
    }
}
SQL
import org.apache.spark.api.java.function.MapFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkDriver {
    public static void main(String[] args) {
        // Create the Spark session
        SparkSession session = SparkSession.builder().appName("[Spark app sql version]").getOrCreate();

        // Read the file as delimited text (csv source), then map each row to a MovieRatings bean
        Dataset<MovieRatings> movieRatings = session.read()
            .format("csv")
            .option("header", false)
            .option("inferSchema", true)
            .option("delimiter", "\t")          // u.data is tab separated
            .load("u.data.txt")
            .map((MapFunction<Row, MovieRatings>) row -> {
                // Columns: userID, movieID, rating, timestamp
                MovieRatings ratingsObject = new MovieRatings();
                ratingsObject.setMovieID(row.getInt(1));
                ratingsObject.setRating(row.getInt(2));
                return ratingsObject;
            }, Encoders.bean(MovieRatings.class));

        // Print the first rows
        movieRatings.show(10);

        // Stop the session
        session.stop();
    }
}
I am getting this error on an insert in Java. Is there a way to prepare the insert so that the driver does not throw this error?
Error:
Exception in thread "main" com.datastax.driver.core.exceptions.InvalidQueryException: Expected 4 or 0 byte int (10)
List<Flight> flightList = ProcessFlightsCSV.processFlights("flights_from_pg.csv");
for (Flight flight : flightList) {
    System.out.println(flight);
    Insert query = QueryBuilder.insertInto("flights")
            .value("id", flight.getId())
            .value("year", flight.getYear())
            .value("fl_date", flight.getFlDate())
            .value("airline_id", flight.getAirlineId())
            .value("carrier", flight.getCarrier())
            .value("fl_num", flight.getFlNum())
            .value("origin_airport_id", flight.getOriginAirportId())
            .value("origin", flight.getOrigin())
            .value("origin_city_name", flight.getOriginCityName())
            .value("origin_state_abr", flight.getOriginStateAbr())
            .value("dest", flight.getDest())
            .value("day_of_month", flight.getDayOfMonth())
            .value("dest_city_name", flight.getDestCityName())
            .value("dest_state_abr", flight.getDestStateAbr())
            .value("dep_time", flight.getDepTime())
            .value("arr_time", flight.getArrTime())
            .value("distance", flight.getDistance());
    session.execute(query);
}
Hopefully you have a proper session open before executing this query.
Update your call to session.execute(query.toString());
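For example (a minimal sketch, assuming session is an open com.datastax.driver.core.Session and flightList is the same list as in the question):
for (Flight flight : flightList) {
    Insert query = QueryBuilder.insertInto("flights")
            .value("id", flight.getId())
            .value("year", flight.getYear());
    // ... add the remaining .value(...) calls exactly as in the question ...
    session.execute(query.toString());   // pass the generated CQL string to execute()
}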
I accessed a MySQL database and fetched a table.
Everything works fine up to that point.
But when I try to save the records as text or in another format, I get this error:
ExitCodeException exitCode=1: ChangeFileModeByMask error (5): Access is denied.
Any help will be appreciated.
object jdbcConnect {
  def main(args: Array[String]) {
    val url = "jdbc:mysql://127.0.0.1:3306/mydb"
    val username = "root"
    val password = "token_password"

    Class.forName("com.mysql.jdbc.Driver").newInstance
    //DriverManager.registerDriver(new com.mysql.jdbc.Driver());

    val conf = new SparkConf().setAppName("JDBC RDD").setMaster("local[2]").set("spark.executor.memory", "1g")
    val sc = new SparkContext(conf)

    val myRDD = new JdbcRDD(sc,
      () => DriverManager.getConnection(url, username, password),
      "select s_Id,issue_date from store_details limit ?, ?",
      0, 10, 1,
      r => r.getString("s_Id") + ", " + r.getString("issue_date"))

    myRDD.foreach(println)
    myRDD.saveAsTextFile("C:/jdbcrddexamplee")
  }
}
Error
17/07/18 11:10:19 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 2)
ExitCodeException exitCode=1: ChangeFileModeByMask error (5): Access is denied.
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
It seemed to be a permission error. My foolishness...
Make sure to run everything as an admin. Though I would suggest using a DataFrame instead of an RDD :D
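For reference, here is a minimal sketch of the DataFrame approach (shown with the Java API; the Scala equivalent is analogous), assuming the same MySQL connection details and table as in the question, and using an example output path:
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class JdbcDataFrameExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("JDBC DataFrame")
                .master("local[2]")
                .getOrCreate();

        // Read the table through the JDBC data source instead of JdbcRDD
        Dataset<Row> storeDetails = spark.read()
                .format("jdbc")
                .option("url", "jdbc:mysql://127.0.0.1:3306/mydb")
                .option("driver", "com.mysql.jdbc.Driver")
                .option("dbtable", "store_details")
                .option("user", "root")
                .option("password", "token_password")
                .load();

        storeDetails.select("s_Id", "issue_date").show(10);

        // Saving still needs a directory the process is allowed to write to
        storeDetails.write().mode("overwrite").csv("C:/tmp/store_details_out");

        spark.stop();
    }
}
The JDBC data source takes care of the connection and row mapping, and the resulting Dataset can be written out with the standard writers.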
I tried to connect to a MySQL database to fetch table records. I can establish the connection, and 10 records are fetched as well, but then the code suddenly crashes, and I don't know why. PS: I am new to Scala... Any help would be appreciated.
object jdbcConnect {
  def main(args: Array[String]) {
    val url = "jdbc:mysql://127.0.0.1:3306/mydb"
    val username = "root"
    val password = "token_password"

    Class.forName("com.mysql.jdbc.Driver").newInstance
    //DriverManager.registerDriver(new com.mysql.jdbc.Driver());

    val conf = new SparkConf().setAppName("JDBC RDD").setMaster("local[2]").set("spark.executor.memory", "1g")
    val sc = new SparkContext(conf)

    val myRDD = new JdbcRDD(sc,
      () => DriverManager.getConnection(url, username, password),
      "select s_Id,issue_date from store_details limit ?, ?",
      0, 10, 1,
      r => r.getString("s_Id") + ", " + r.getString("issue_date"))

    myRDD.foreach(println)
    myRDD.saveAsTextFile("C:/jdbcrddexamplee")
  }
}
ERROR
17/07/16 02:32:24 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
ExitCodeException exitCode=1: ChangeFileModeByMask error (5): Access is denied.
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:398)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:461)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:804)
It seemed to be a permission error.
My foolishness...
Make sure to run everything as an admin.
Though I would suggest using a DataFrame instead of an RDD :D
Thanks
I'm working with GraphX and Spark SQL, and I'm trying to create a DataFrame (Dataset) inside a graph node. To create a DataFrame I need the SparkSession (spark.createDataFrame(rows, schema)). Whatever I try, I get an error. This is my code:
SparkSession spark = SparkSession.builder()
        .master("spark://home:7077")
        .appName("testgraph")
        .getOrCreate();

JavaSparkContext sc = new JavaSparkContext(spark.sparkContext());

// read tree file
JavaRDD<String> tree_file = sc.textFile(args[1]);
JavaPairRDD<String[], Long> node_pair = tree_file.map(l -> l.split(" ")).zipWithIndex();

// create vertices
RDD<Tuple2<Object, Tuple2<Dataset<Row>, Clauses>>> verteces = node_pair.map(t -> {
    List<StructField> fields = new ArrayList<StructField>();
    List<Row> rows = new ArrayList<>();

    String[] vars = Arrays.copyOfRange(t._1(), 2, t._1().length);
    for (int i = 0; i < vars.length; i++) {
        fields.add(DataTypes.createStructField(vars[i], DataTypes.BooleanType, true));
    }
    StructType schema = DataTypes.createStructType(fields);

    Dataset<Row> ds = spark.createDataFrame(rows, schema);
    return new Tuple2<>((Object) (t._2 + 1), ds);
}).rdd();
This is the error I'm getting:
16/08/23 15:25:36 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 3, 192.168.1.5): java.lang.NullPointerException
at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:112)
at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:110)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
at org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:328)
at Main.lambda$main$e7daa47c$1(Main.java:62)
at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1028)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:148)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I also tried to get the session inside map() with:
SparkSession ss = SparkSession.builder()
.master("spark://home:7077")
.appName("testgraph")
.getOrCreate();
I also get an error:
16/08/23 15:00:29 WARN TaskSetManager: Lost task 0.0 in stage 4.0 (TID 7, 192.168.1.5): java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
at org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I hope someone can help me. I can't find a solution.
THANKS!