Pivoting DataFrame - Spark SQL - java

I have a DataFrame containing the following:
TradeId|Source
ABC|"USD,333.123,20170605|USD,-789.444,20170605|GBP,1234.567,20150602"
I want to pivot this data so it turns into the following:
TradeId|CCY|PV
ABC|USD|333.123
ABC|USD|-789.444
ABC|GBP|1234.567
The number of CCY|PV|Date triplets in the column "Source" is not fixed. I could do it with an ArrayList, but that requires loading the data into the JVM and defeats the whole point of Spark.
Let's say my DataFrame is loaded as below:
DataFrame tradesSnap = this.loadTradesSnap(reportRequest);
String tempTable = getTempTableName();
tradesSnap.registerTempTable(tempTable);
tradesSnap = tradesSnap.sqlContext().sql("SELECT TradeId, Source FROM " + tempTable);

If you read the Databricks documentation on pivot, it says: "A pivot is an aggregation where one (or more in the general case) of the grouping columns has its distinct values transposed into individual columns." That is not what you want here, I think.
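For contrast, here is a minimal sketch of what an actual pivot does, on a made-up toy dataframe (not your data), assuming a SparkSession named spark is in scope; note that the distinct values become columns, not extra rows:
import org.apache.spark.sql.functions.sum
import spark.implicits._

// hypothetical toy data: one row per (shop, ccy, amount)
val sales = Seq(("shop1", "USD", 10.0), ("shop1", "GBP", 5.0), ("shop2", "USD", 7.0))
  .toDF("shop", "ccy", "amount")

// the distinct ccy values ("GBP", "USD") become columns, one row per shop
sales.groupBy("shop").pivot("ccy").agg(sum("amount")).show()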
I would suggest using withColumn with the built-in functions to get the final output you want. You can do the following, assuming dataframe is what you have:
+-------+----------------------------------------------------------------+
|TradeId|Source |
+-------+----------------------------------------------------------------+
|ABC |USD,333.123,20170605|USD,-789.444,20170605|GBP,1234.567,20150602|
+-------+----------------------------------------------------------------+
You can do the following, using explode, split and withColumn, to get the desired output:
import org.apache.spark.sql.functions.{col, explode, split}
import spark.implicits._   // for the $"..." column syntax, assuming a SparkSession named spark

// one row per CCY,PV,Date triplet
val explodedDF = dataframe.withColumn("Source", explode(split(col("Source"), "\\|")))

// split each triplet into its own columns and drop the raw Source
val finalDF = explodedDF.withColumn("CCY", split($"Source", ",")(0))
  .withColumn("PV", split($"Source", ",")(1))
  .withColumn("Date", split($"Source", ",")(2))
  .drop("Source")

finalDF.show(false)
The final output is
+-------+---+--------+--------+
|TradeId|CCY|PV |Date |
+-------+---+--------+--------+
|ABC |USD|333.123 |20170605|
|ABC |USD|-789.444|20170605|
|ABC |GBP|1234.567|20150602|
+-------+---+--------+--------+
I hope this solves your issue

Rather than pivoting, what you are trying to achieve looks more like flatMap.
To put it simply, by using flatMap on a Dataset you apply to each row a function (map) that itself would produce a sequence of rows. Each set of rows is then concatenated into a single sequence (flat).
The following program shows the idea:
import org.apache.spark.sql.SparkSession

case class Input(TradeId: String, Source: String)
case class Output(TradeId: String, CCY: String, PV: String, Date: String)

object FlatMapExample {

  // This function will produce more rows of output for each line of input
  def splitSource(in: Input): Seq[Output] =
    in.Source.split("\\|", -1).map { source =>
      println(source)
      val Array(ccy, pv, date) = source.split(",", -1)
      Output(in.TradeId, ccy, pv, date)
    }

  def main(args: Array[String]): Unit = {

    // Initialization and loading
    val spark = SparkSession.builder().master("local").appName("pivoting-example").getOrCreate()
    import spark.implicits._

    val input = spark.read.options(Map("sep" -> "|", "header" -> "true")).csv(args(0)).as[Input]

    // For each line in the input, split the source and then
    // concatenate each "sub-sequence" in a single `Dataset`
    input.flatMap(splitSource).show
  }
}
Given your input, this would be the output:
+-------+---+--------+--------+
|TradeId|CCY| PV| Date|
+-------+---+--------+--------+
| ABC|USD| 333.123|20170605|
| ABC|USD|-789.444|20170605|
| ABC|GBP|1234.567|20150602|
+-------+---+--------+--------+
You can now take the result and save it to a CSV, if you want.
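For example, a minimal sketch of writing the flattened rows back out (the output path below is made up for illustration):
// write the flattened rows as CSV with a header row
input.flatMap(splitSource)
  .write
  .option("header", "true")
  .csv("/tmp/trades-flattened")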

Related

how to populate select clause of dataframe dynamically? giving AnalysisException

I am using spark-sql 2.4.1 and Java 8.
val country_df = Seq(
("us",2001),
("fr",2002),
("jp",2002),
("in",2001),
("fr",2003),
("jp",2002),
("in",2003)
).toDF("country","data_yr")
val col_df = country_df.select("country").where($"data_yr" === 2001)
val data_df = Seq(
("us_state_1","fr_state_1" ,"in_state_1","jp_state_1"),
("us_state_2","fr_state_2" ,"in_state_2","jp_state_1"),
("us_state_3","fr_state_3" ,"in_state_3","jp_state_1")
).toDF("us","fr","in","jp")
data_df.select("us","in").show()
How can I populate this select clause (of data_df) dynamically, from country_df, for a given year?
That is, from the first dataframe I get the column values, and those are the columns I need to select from the second dataframe. How can this be done?
Tried this:
List<String> aa = col_df.select(functions.lower(col("data_item_code"))).map(row -> row.mkString(" ",", "," "), Encoders.STRING()).collectAsList();
data_df.select(aa.stream().map(s -> new Column(s)).toArray(Column[]::new));
Error:
.AnalysisException: cannot resolve '` un `' given input columns: [abc,.....all columns ...]
So what is wrong here, and how can I fix it?
You can try the code below.
Select the column names from the first dataset:
import static org.apache.spark.sql.functions.col;

List<String> columns = country_df.select("country").where(col("data_yr").equalTo(2001)).as(Encoders.STRING()).collectAsList();
Use the column names with selectExpr on the second dataset:
public static Seq<String> convertListToSeq(List<String> inputList) {
return JavaConverters.asScalaIteratorConverter(inputList.iterator()).asScala().toSeq();
}
//using selectExpr
data_df.selectExpr(convertListToSeq(columns)).show(true);
scala> val colname = col_df.rdd.collect.toList.map(x => x(0).toString).toSeq
scala> data_df.select(colname.head, colname.tail: _*).show()
+----------+----------+
| us| in|
+----------+----------+
|us_state_1|in_state_1|
|us_state_2|in_state_2|
|us_state_3|in_state_3|
+----------+----------+
Using pivot you can get the values as column names directly like this:
val selectCols = col_df.groupBy().pivot($"country").agg(lit(null)).columns
data_df.select(selectCols.head, selectCols.tail: _*)

Formatting string content xtext 2.14

Given a grammar (simplified version below) where I can enter arbitrary text in a section of the grammar, is it possible to format the content of the arbitrary text? I understand how to format the position of the arbitrary text in relation to the rest of the grammar, but I don't know whether it is possible to format the content string itself.
Sample grammar
Model:
'content' content=RT
terminal RT: // (returns ecore::EString:)
'RT>>' -> '<<RT';
Sample content
content RT>>
# Some sample arbitrary text
which I would like to format
<<RT
You can add a custom ITextReplacer to the region of the string.
Assuming you have a grammar like
Model:
greetings+=Greeting*;
Greeting:
'Hello' name=STRING '!';
you can do something like the following in the formatter:
def dispatch void format(Greeting model, extension IFormattableDocument document) {
    model.prepend[newLine]
    val region = model.regionFor.feature(MyDslPackage.Literals.GREETING__NAME)
    // replace every space inside the string value with a newline
    val r = new AbstractTextReplacer(document, region) {
        override createReplacements(ITextReplacerContext it) {
            val text = region.text
            var int index = text.indexOf(SPACE);
            val offset = region.offset
            while (index >= 0) {
                it.addReplacement(region.textRegionAccess.rewriter.createReplacement(offset + index, SPACE.length, "\n"))
                index = text.indexOf(SPACE, index + SPACE.length());
            }
            it
        }
    }
    addReplacer(r)
}
This will turn this model
Hello "A B C"!
into
Hello "A
B
C"!
Of course, you will need to come up with more sophisticated formatter logic.
See also: How to define different indentation levels in the same document with Xtext formatter.

Spark: how to write efficient sql query to achieve this goal

I have a JSON file whose structure is [{"time","currentStop","lat","lon","speed"}]; here is an example:
[
{"time":"2015-06-09 23:59:59","currentStop":"xx","lat":"22.264856","lon":"113.520450","speed":"25.30"},
{"time":"2015-06-09 21:00:49","currentStop":"yy","lat":"22.263","lon":"113.52","speed":"34.5"},
{"time":"2015-06-09 21:55:49","currentStop":"zz","lat":"21.3","lon":"113.521","speed":"13.7"}
]
I want to get a JSON result with the structure [{"hour","value":["currentStop","lat","lon","speed"]}], i.e. the distinct ("currentStop","lat","lon","speed") tuples grouped by hour. Here is the result for the example (skipping some empty values):
[
{"hour":0,"value":[]},
{"hour":1,"value":[]},
......
{"hour":21,"value":[{"currentStop":"yy","lat":"22.263","lon":"113.52","speed":"34.5"},{"currentStop":"zz","lat":"21.3","lon":"113.521","speed":"13.7"}]}
{"hour":23, "value": [{"currentStop":"xx","lat":22.264856,"lon":113.520450,"speed":25.30}]},
]
Is it possible to achieve this using a Spark SQL query?
I use Spark with the Java API, and with a loop I can get what I want, but this approach is really inefficient and expensive.
Here is my code:
Dataset<Row> bus_ic = spark.read().json(file);
bus_ic.createOrReplaceTempView("view");
StringBuilder text = new StringBuilder("[");

bus_ic.select(bus_ic.col("currentStop"),
        bus_ic.col("lon").cast("double"), bus_ic.col("speed").cast("double"),
        bus_ic.col("lat").cast("double"), bus_ic.col("LINEID"),
        bus_ic.col("time").cast("timestamp"))
    .createOrReplaceTempView("view");

StringBuilder sqlString = new StringBuilder();
for (int i = 0; i < 24; i++) {
    sqlString.delete(0, sqlString.length());
    sqlString.append("select currentStop, speed, lat, lon from view where hour(time) = ")
        .append(i)
        .append(" group by currentStop, speed, lat, lon");
    Dataset<Row> t = spark.sql(sqlString.toString());
    text.append("{")
        .append("\"h\":").append(i)
        .append(",\"value\":")
        .append(t.toJSON().collectAsList().toString())
        .append("}");
    if (i != 23) text.append(",");
}
text.append("]");
There must be some other ways to solve this problem. How to write efficient sql query to achieve this goal?
You can write your code in a much more concise way (Scala code):
import org.apache.spark.sql.functions._

val bus_comb = bus_ic
  .groupBy(hour(to_timestamp(col("time"))).as("hour"))
  .agg(collect_set(struct(
    col("currentStop"), col("lat"), col("lon"), col("speed")
  )).alias("value"));

bus_comb.toJSON.show(false);
// +--------------------------------------------------------------------------------------------------------------------------------------------------------+
// |value |
// +--------------------------------------------------------------------------------------------------------------------------------------------------------+
// |{"hour":23,"value":[{"currentStop":"xx","lat":"22.264856","lon":"113.520450","speed":"25.30"}]} |
// |{"hour":21,"value":[{"currentStop":"yy","lat":"22.263","lon":"113.52","speed":"34.5"},{"currentStop":"zz","lat":"21.3","lon":"113.521","speed":"13.7"}]}|
// +--------------------------------------------------------------------------------------------------------------------------------------------------------+
But with only 24 groups there is no opportunity for scaling out here. It might be an interesting exercise, but it is not something you can really apply to a large dataset, where using Spark makes sense.
You can add missing hours by joining with range:
spark.range(0, 24).toDF("hour").join(bus_comb, Seq("hour"), "leftouter")
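As a minimal follow-up sketch (reusing bus_comb from above; the output path is made up for illustration), you could then write one JSON record per hour:
val allHours = spark.range(0, 24).toDF("hour")
  .join(bus_comb, Seq("hour"), "leftouter")

// hours with no observations come back with value = null
allHours.orderBy("hour").write.json("/tmp/hourly-stops")
Whether a null value is acceptable, or needs to be turned into an empty array, depends on the consumer of the JSON.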

UDF to extract only the file name from path in Spark SQL

There is an input_file_name function in Apache Spark, which I use to add a new column to a Dataset with the name of the file currently being processed.
The problem is that I'd like to somehow customize this function to return only the file name, omitting the full path to it on S3.
For now, I am replacing the path in a second step using a map function:
val initialDs = spark.sqlContext.read
.option("dateFormat", conf.dateFormat)
.schema(conf.schema)
.csv(conf.path).withColumn("input_file_name", input_file_name)
...
...
def fromFile(fileName: String): String = {
val baseName: String = FilenameUtils.getBaseName(fileName)
val tmpFileName: String = baseName.substring(0, baseName.length - 8) //here is magic conversion ;)
this.valueOf(tmpFileName)
}
But I'd like to use something like
val initialDs = spark.sqlContext.read
.option("dateFormat", conf.dateFormat)
.schema(conf.schema)
.csv(conf.path).withColumn("input_file_name", **customized_input_file_name_function**)
In Scala:
import org.apache.spark.sql.functions.input_file_name

// register the UDF; spark.udf.register also returns it, so keep a handle for the DataFrame API
val get_only_file_name = spark.udf
  .register("get_only_file_name", (fullPath: String) => fullPath.split("/").last)

// use the UDF to get the last token (the file name) of the full path
val initialDs = spark.read
  .option("dateFormat", conf.dateFormat)
  .schema(conf.schema)
  .csv(conf.path)
  .withColumn("input_file_name", get_only_file_name(input_file_name()))
Edit: In Java, as per the comment:
import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.types.DataTypes;
import static org.apache.spark.sql.functions.callUDF;
import static org.apache.spark.sql.functions.input_file_name;

// register the UDF
spark.udf()
    .register("get_only_file_name", (UDF1<String, String>) fullPath -> {
        int lastIndex = fullPath.lastIndexOf("/");
        return fullPath.substring(lastIndex + 1);
    }, DataTypes.StringType);

// use the UDF to get the last token (the file name) of the full path
Dataset<Row> initialDs = spark.read()
    .option("dateFormat", conf.dateFormat)
    .schema(conf.schema)
    .csv(conf.path)
    .withColumn("input_file_name", callUDF("get_only_file_name", input_file_name()));
Borrowing from a related question here, the following method is more portable and does not require a custom UDF.
Spark SQL Code Snippet: reverse(split(path, '/'))[0]
Spark SQL Sample:
WITH sample_data as (
SELECT 'path/to/my/filename.txt' AS full_path
)
SELECT
full_path
, reverse(split(full_path, '/'))[0] as basename
FROM sample_data
Explanation:
The split() function breaks the path into its chunks and reverse() puts the final item (the file name) at the front of the array, so that [0] can extract just the filename.
Full code example:
spark.sql(
"""
|WITH sample_data as (
| SELECT 'path/to/my/filename.txt' AS full_path
| )
| SELECT
| full_path
| , reverse(split(full_path, '/'))[0] as basename
| FROM sample_data
|""".stripMargin).show(false)
Result:
+-----------------------+------------+
|full_path |basename |
+-----------------------+------------+
|path/to/my/filename.txt|filename.txt|
+-----------------------+------------+
Commons IO is the natural/easiest choice in Spark, meaning there is no need to add an additional dependency:
import org.apache.commons.io.FilenameUtils
getBaseName(String fileName)
Gets the base name, minus the full path and extension, from a full fileName.
import org.apache.spark.sql.functions.udf

val baseNameOfFile = udf((longFilePath: String) => FilenameUtils.getBaseName(longFilePath))
Usage is like:
yourdataframe.withColumn("shortpath", baseNameOfFile(yourdataframe("input_file_name")))
  .show(1000, false)
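If you want to keep the file extension rather than strip it, a small variation (my suggestion, not part of the original answer) is FilenameUtils.getName, which returns the name including the extension:
import org.apache.commons.io.FilenameUtils
import org.apache.spark.sql.functions.{input_file_name, udf}

// unlike getBaseName, getName keeps the extension (e.g. "part-0001.csv")
val fileNameOfPath = udf((longFilePath: String) => FilenameUtils.getName(longFilePath))

yourdataframe.withColumn("shortname", fileNameOfPath(input_file_name()))
  .show(1000, false)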

Converting CSV file to LIBSVM compatible data file using python

I am doing a project using libsvm and I am preparing my data to use the library. How can I convert a CSV file to LIBSVM-compatible data?
CSV File:
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/datasets/data/iris.csv
In the frequently asked questions:
How to convert other data formats to LIBSVM format?
It depends on your data format. A simple way is to use libsvmwrite in the libsvm matlab/octave interface. Take a CSV (comma-separated values) file in UCI machine learning repository as an example. We download SPECTF.train. Labels are in the first column. The following steps produce a file in the libsvm format.
matlab> SPECTF = csvread('SPECTF.train'); % read a csv file
matlab> labels = SPECTF(:, 1); % labels from the 1st column
matlab> features = SPECTF(:, 2:end);
matlab> features_sparse = sparse(features); % features must be in a sparse matrix
matlab> libsvmwrite('SPECTFlibsvm.train', labels, features_sparse);
The transformed data are stored in SPECTFlibsvm.train.
Alternatively, you can use convert.c to convert CSV format to libsvm format.
But I don't want to use MATLAB; I use Python.
I found this solution as well, using Java.
Can anyone recommend a way to tackle this problem?
You can use csv2libsvm.py to convert CSV to libsvm data:
python csv2libsvm.py iris.csv libsvm.data 4 True
where 4 is the target (label) column index, and True means the CSV has a header.
Finally, you can get libsvm.data as
0 1:5.1 2:3.5 3:1.4 4:0.2
0 1:4.9 2:3.0 3:1.4 4:0.2
0 1:4.7 2:3.2 3:1.3 4:0.2
0 1:4.6 2:3.1 3:1.5 4:0.2
...
from iris.csv
150,4,setosa,versicolor,virginica
5.1,3.5,1.4,0.2,0
4.9,3.0,1.4,0.2,0
4.7,3.2,1.3,0.2,0
4.6,3.1,1.5,0.2,0
...
csv2libsvm.py does not work with Python 3, and it also does not support string label targets, so I have slightly modified it. Now it should work with Python 3 as well as with string label targets.
I am very new to Python, so my code may not follow best practices, but I hope it is good enough to help someone.
#!/usr/bin/env python
"""
Convert CSV file to libsvm format. Works only with numeric variables.
Put -1 as label index (argv[3]) if there are no labels in your file.
Expecting no headers. If present, headers can be skipped with argv[4] == 1.
"""
import sys
import csv


def construct_line(label, line, labels_dict):
    new_line = []
    if label.isnumeric():
        # numeric labels are written as-is; note that isnumeric() only matches
        # non-negative integer strings, so e.g. "-1" or "1.5" is treated as a string label
        if float(label) == 0.0:
            label = "0"
        new_line.append(label)
    else:
        # string labels are mapped to numeric ids
        if label in labels_dict:
            new_line.append(labels_dict.get(label))
        else:
            label_id = str(len(labels_dict))
            labels_dict[label] = label_id
            new_line.append(label_id)

    # libsvm is a sparse format: skip empty and zero-valued features
    for i, item in enumerate(line):
        if item == '' or float(item) == 0.0:
            continue
        elif item == 'NaN':
            item = "0.0"
        new_item = "%s:%s" % (i + 1, item)
        new_line.append(new_item)

    new_line = " ".join(new_line)
    new_line += "\n"
    return new_line

# ---

input_file = sys.argv[1]

try:
    output_file = sys.argv[2]
except IndexError:
    output_file = input_file + ".out"

try:
    label_index = int(sys.argv[3])
except IndexError:
    label_index = 0

try:
    skip_headers = sys.argv[4]
except IndexError:
    skip_headers = 0

i = open(input_file, 'rt')
o = open(output_file, 'wb')

reader = csv.reader(i)

if skip_headers:
    headers = next(reader)

labels_dict = {}
for line in reader:
    if label_index == -1:
        label = '1'
    else:
        label = line.pop(label_index)

    new_line = construct_line(label, line, labels_dict)
    o.write(new_line.encode('utf-8'))
