Read Excel and Create Json Req - java

I have an Excel file with some data. How do we create a JSON request by reading that data from the Excel file using Java? I am almost new to JSON.
Excel :-
RID  COID  IssD  ExpD  Desc  XValue  XSource
1    2     3     4     5     6       7
JSON :- ??
Thanks in advance.
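A minimal sketch of one way to build that JSON, assuming Apache POI to read an .xlsx file and Jackson to write the JSON; the file name data.xlsx, the single header row, and the class name ExcelToJson are assumptions, not part of the question:

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.poi.ss.usermodel.*;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

import java.io.FileInputStream;
import java.util.*;

public class ExcelToJson {
    public static void main(String[] args) throws Exception {
        try (Workbook wb = new XSSFWorkbook(new FileInputStream("data.xlsx"))) {
            Sheet sheet = wb.getSheetAt(0);
            Row header = sheet.getRow(0);            // RID, COID, IssD, ExpD, Desc, XValue, XSource
            DataFormatter fmt = new DataFormatter(); // renders any cell type as text

            List<Map<String, String>> rows = new ArrayList<>();
            for (int r = 1; r <= sheet.getLastRowNum(); r++) {
                Row row = sheet.getRow(r);
                if (row == null) continue;
                Map<String, String> record = new LinkedHashMap<>();
                for (int c = 0; c < header.getLastCellNum(); c++) {
                    record.put(fmt.formatCellValue(header.getCell(c)),
                               fmt.formatCellValue(row.getCell(c)));
                }
                rows.add(record);
            }

            // Prints e.g. [{"RID":"1","COID":"2","IssD":"3",...}]
            System.out.println(new ObjectMapper().writeValueAsString(rows));
        }
    }
}

From there the string can be sent as the body of an HTTP request with whatever client you already use.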

Related

Jmeter: Using CSV file with uneven columns to test drive sampler

I have a csv file that has a list of stores. For every Store there are 10 departments.
I will need to make a GET API call for all 10 departments in each of the 100 stores, so the columns in my CSV file are not even: column A has 100 store IDs, and column B has 10 department IDs.
How can I use every store ID 10 times (once with every department ID) in a JMeter sampler?
If you want to achieve this using the CSV Data Set Config, the only way is to split your CSV file into 2 separate files.
If the CSV file comes from an external source and cannot be changed, you can consider using the __groovy() function, like:
${__groovy(new File('test.csv').readLines().get(vars.get('__jm__Loop Controller - Store__idx') as int).split('\,')[0],)}
Given an example CSV file test.csv with the following contents:
store1,department1
store2,department2
,department3
,department4
,department5
,department6
,department7
,department8
,department9
,department10
Here __jm__Loop Controller - Store__idx is the iteration counter that JMeter exposes for the Loop Controller named Loop Controller - Store, so the expression reads the matching line of test.csv and takes the first column (the store ID); changing [0] to [1] reads the department column instead.
More information on Groovy scripting in JMeter: Apache Groovy - Why and How You Should Use It

Is it possible to fetch and compare an element that resides in a nested JSON string column in a database, via SQL query?

I have a unique id and want to fetch records from a table on the basis of that id. I have a column named "request body" that contains a nested JSON string and is of type text. Is there any way I can compare the unique id with the 'uniqueId' inside the JSON string column, i.e. request body?
Apologies, I am new to Stack Overflow.
For anyone looking for the solution, below are the two approaches:
APPROACH 1
SELECT t.request_body
FROM table t
WHERE cast(request_body as JSON) ->> 'uniqueId' = '123'
APPROACH 2
SELECT t.request_body
FROM table t
WHERE substr(t.request_body, position('uniqueId' in request_body) + 11, 3) = '123'
Note: 11 is the length of 'uniqueId":"' and 3 the length of the id 123 that follows.
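If the lookup has to run from Java rather than a SQL console, a hedged JDBC sketch of APPROACH 1 (this assumes PostgreSQL, whose ->> operator the queries above use; the table name my_table and the connection details are placeholders):

import java.sql.*;

public class FindByUniqueId {
    public static void main(String[] args) throws SQLException {
        String sql = "SELECT t.request_body FROM my_table t "
                   + "WHERE CAST(t.request_body AS JSON) ->> 'uniqueId' = ?";
        try (Connection con = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/mydb", "user", "password");
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, "123");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("request_body"));
                }
            }
        }
    }
}

Binding the id through a PreparedStatement also avoids the injection risk of concatenating '123' into the query text.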

Database DataFrame Null values not coming to Json File

I have a database containing null values in some columns, and I am converting the dataframe formed from the database to a JSON file. The problem is that the null columns do not appear in the output. Here is the code as well as the output:
dataFrame.show();
dataFrame.na().fill("null").coalesce(1)
.write()
.mode("append")
.format("Json")
.option("nullValue", "")
.save("D:\\XML File Testing\\"+"JsonParty1");
The dataframe.show() gives the following output:
[screenshot: dataFrame.show() output, with null values in the MGR and COMM columns: https://i.stack.imgur.com/XxAQC.png]
Here is how it is being saved in the file (I am pasting just one record to show the example):
{"EMPNO":7839,"ENAME":"KING","JOB":"PRESIDENT","HIREDATE":"1981-11-17T00:00:00.000+05:30","SAL":5000.00,"DEPTNO":10}
As you can see, my "MGR" and "COMM" columns are missing because they show null in the dataframe. Surprisingly, this works when the dataframe is formed from a structured file (e.g. a delimited txt file) containing empty values (which the Spark dataframe reads as null). I have tried various approaches but still failed to get the null columns into the JSON file. Any help would be much appreciated.
Try this:
import org.apache.spark.sql.functions._

dataFrame.withColumn("json", to_json(struct(dataFrame.columns.map(col): _*)))
  .select("json")
  .write.mode("append")
  .text("D:\\XML File Testing\\" + "JsonParty1")

Getting data from two CSVs using Spark (Java)

I have 2 CSV files.
Employee.csv with the schema:
EmpId Fname
1 John
2 Jack
3 Ram
and the 2nd CSV file, Leave.csv:
EmpId LeaveType Designation
1 Sick SE
1 Casual SE
2 Sick SE
3 Privilege M
1 Casual SE
2 Privilege SE
Now I want the data in JSON as:
EmpID-1
Sick : 2
Casual : 2
Privilege : 0
using Spark in Java.
Grouping by the column 'LeaveType' and performing a count on it:
import org.apache.spark.sql.functions.{col, count}

val leaves = ??? // Load leaves
leaves.groupBy(col("LeaveType")).agg(count(col("LeaveType")).as("total_leaves")).show()
I'm not familiar with Java syntax, but if you do not want to use the dataframe API you may do something like this in Scala:
val rdd = sc.textFile("/path/to/leave.csv").map(_.split(",")).map(x => ((x(0), x(1), x(2)), 1)).reduceByKey(_ + _)
Now you need to use an external library like Gson to transform each element of this RDD into the desired JSON format. Each element of the RDD is a key-value pair of the form ((EmpId, LeaveType, Designation), leaveCount).
Let me know if this helped, Cheers.
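If you do want to stay in Java, a hedged sketch with the DataFrame API that produces one JSON record per employee in roughly the requested shape (the header option and file path are assumptions; pivot turns each LeaveType into a column, and na().fill(0) supplies the Privilege : 0 style zeros):

import static org.apache.spark.sql.functions.*;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder().appName("leaves").getOrCreate();

Dataset<Row> leaves = spark.read().option("header", "true").csv("/path/to/Leave.csv");

leaves.groupBy(col("EmpId"))
      .pivot("LeaveType")   // one column per leave type: Sick, Casual, Privilege
      .count()
      .na().fill(0)         // employees with no leaves of a type get 0
      .toJSON()             // one JSON string per employee
      .show(false);         // e.g. {"EmpId":"1","Casual":2,"Privilege":0,"Sick":2}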

Grouping of data by week using Apache Spark

I am a newbie to Spark. I have around 15 TB of data in Mongo:
ApplicationName  Name   IPCategory  Success  Fail  CreatedDate
abc              a.com  cd          3        1     25-12-2015 00:00:00
def              d.com  ty          2        2     25-12-2015 01:20:00
abc              b.com  cd          5        0     01-01-2015 06:40:40
I am looking to filter on ApplicationName and group by (Name, IPCategory) over one week of data. I am able to fetch data from Mongo and save the output back to Mongo. I am working on it using Java.
NOTE: From one month of data I need only the last week, grouped by (Name, IPCategory).
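A hedged sketch of the filtering and grouping step in Java with the DataFrame API. It assumes the data has already been loaded from Mongo into a Dataset<Row> (e.g. with the MongoDB Spark connector) and that CreatedDate is a string in the dd-MM-yyyy HH:mm:ss format shown above:

import static org.apache.spark.sql.functions.*;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class WeeklyGrouping {
    // Keep only the last 7 days for one application, then aggregate per (Name, IPCategory).
    public static Dataset<Row> lastWeekByNameAndCategory(Dataset<Row> df, String appName) {
        return df
            .withColumn("created", to_timestamp(col("CreatedDate"), "dd-MM-yyyy HH:mm:ss"))
            .filter(col("created").geq(date_sub(current_date(), 7)))
            .filter(col("ApplicationName").equalTo(appName))
            .groupBy(col("Name"), col("IPCategory"))
            .agg(sum("Success").as("success"), sum("Fail").as("fail"));
    }
}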
