Update multiple rows using Hibernate ORM - Java

Table name: Country
+----+--------------+--------------------+-------------------+
| id | country_name | country_short_name | country_full_name |
+----+--------------+--------------------+-------------------+
|  1 | Bagladesh    | BD                 | Bagladesh         |
|  2 | Bagladesh    | BCDD               | sdriij            |
|  3 | India        | IND                | India             |
+----+--------------+--------------------+-------------------+
In Laravel I update multiple rows using:
Country::where('country_name', '=', 'Bagladesh')
    ->update(array(
        'country_short_name' => 'BD',
        'country_full_name' => 'Bangladesh',
    ));
I want to do the same using Hibernate.
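A minimal sketch of the equivalent bulk update in Hibernate, assuming a Country entity mapped to the table above with properties countryName, countryShortName, and countryFullName (the entity and property names are assumptions). An HQL UPDATE is translated into a single SQL UPDATE, so all matching rows are changed without loading them as entities:

// Bulk update via HQL; assumes a mapped Country entity (names are assumed).
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
int updated = session.createQuery(
        "update Country set countryShortName = :shortName, " +
        "countryFullName = :fullName where countryName = :name")
    .setParameter("shortName", "BD")
    .setParameter("fullName", "Bangladesh")
    .setParameter("name", "Bagladesh")
    .executeUpdate(); // number of rows affected
tx.commit();
session.close();

Like Laravel's update(), executeUpdate() returns the affected row count. Note that bulk HQL updates bypass the persistence context, so refresh any Country instances you already have loaded.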

Related

Add column to a Dataset based on the value from Another Dataset

I have a dataset dsCustomer that has the customer details with columns:
|customerID|idpt|totalAmount|
|customer1 |H1  |250        |
|customer2 |H2  |175        |
|customer3 |H3  |4000       |
|customer4 |H3  |9000       |
I have another dataset dsCategory that contains the category based on the sales amount:
|categoryID|idpt|borne_min|borne_max|
|A         |H2  |0        |1000     |
|B         |H2  |1000     |5000     |
|C         |H2  |5000     |7000     |
|D         |H2  |7000     |10000    |
|F         |H3  |0        |1000     |
|G         |H3  |1000     |5000     |
|H         |H3  |5000     |7000     |
|I         |H3  |7000     |1000000  |
I would like a result that takes each customer's totalAmount and finds the matching category:
|customerID|idpt|totalAmount|category|
|customer1 |H1  |250        |null    |
|customer2 |H2  |175        |A       |
|customer3 |H3  |4000       |G       |
|customer4 |H3  |9000       |I       |
//udf
public static Column getCategoryAmount(Dataset<Row> ds, Column amountColumn) {
    return ds.filter(amountColumn.geq(col("borne_min"))
            .and(amountColumn.lt(col("borne_max"))))
        .first().getAs("categoryID");
}
//code to add column to my dataset
dsCustomer.withColumn("category", getCategoryAmount(dsCategory, dsCustomer.col("totalAmount")));
How can I pass the value of a column from my customer dataset to my UDF? The error says that totalAmount is not contained in the category dataset.
Question: How can I use map so that, for each row in dsCustomer, I go and check the values in dsCategory?
I have tried to join the 2 tables, but it is not working, because dsCustomer should keep the same records and just gain the calculated column picked from dsCategory.
caused by: org.apache.spark.sql.AnalysisException: cannot resolve '`totalAmount`' given input columns: [categoryID,borne_min,borne_max];;
'Filter (('totalAmount>= borne_min#220) && ('totalAmount < borne_max#221))
You have to join the two datasets. withColumn only allows modifications of the same Dataset.
UPDATE
I did not have time earlier to explain in detail what I meant. This is what I was trying to explain: you can join two dataframes. In your case you need a left join to preserve the rows which don't have a matching category.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

cust = [
    ('customer1', 'H1', 250),
    ('customer2', 'H2', 175),
    ('customer3', 'H3', 4000),
    ('customer4', 'H3', 9000)
]
cust_df = spark.createDataFrame(cust, ['customerID', 'idpt', 'totalAmount'])
cust_df.show()

cat = [
    ('A', 'H2', 0, 1000),
    ('B', 'H2', 1000, 5000),
    ('C', 'H2', 5000, 7000),
    ('D', 'H2', 7000, 10000),
    ('F', 'H3', 0, 1000),
    ('G', 'H3', 1000, 5000),
    ('H', 'H3', 5000, 7000),
    ('I', 'H3', 7000, 1000000)
]
cat_df = spark.createDataFrame(cat, ['categoryID', 'idpt', 'borne_min', 'borne_max'])
cat_df.show()

cust_df.join(cat_df,
             (cust_df.idpt == cat_df.idpt) &
             (cust_df.totalAmount >= cat_df.borne_min) &
             (cust_df.totalAmount <= cat_df.borne_max),
             how='left') \
    .select(cust_df.customerID, cust_df.idpt, cust_df.totalAmount, cat_df.categoryID) \
    .show()
Output
+----------+----+-----------+
|customerID|idpt|totalAmount|
+----------+----+-----------+
| customer1| H1| 250|
| customer2| H2| 175|
| customer3| H3| 4000|
| customer4| H3| 9000|
+----------+----+-----------+
+----------+----+---------+---------+
|categoryID|idpt|borne_min|borne_max|
+----------+----+---------+---------+
| A| H2| 0| 1000|
| B| H2| 1000| 5000|
| C| H2| 5000| 7000|
| D| H2| 7000| 10000|
| F| H3| 0| 1000|
| G| H3| 1000| 5000|
| H| H3| 5000| 7000|
| I| H3| 7000| 1000000|
+----------+----+---------+---------+
+----------+----+-----------+----------+
|customerID|idpt|totalAmount|categoryID|
+----------+----+-----------+----------+
| customer1| H1| 250| null|
| customer3| H3| 4000| G|
| customer4| H3| 9000| I|
| customer2| H2| 175| A|
+----------+----+-----------+----------+
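Since the question uses the Java API, a rough Java translation of the same left join might look like the sketch below; it assumes dsCustomer and dsCategory are the two datasets shown above and is untested. (It keeps the half-open range geq/lt from your UDF, whereas the PySpark answer uses an inclusive upper bound.)

// Left join on idpt plus the amount range, keeping all customers.
Dataset<Row> withCategory = dsCustomer.join(
        dsCategory,
        dsCustomer.col("idpt").equalTo(dsCategory.col("idpt"))
            .and(dsCustomer.col("totalAmount").geq(dsCategory.col("borne_min")))
            .and(dsCustomer.col("totalAmount").lt(dsCategory.col("borne_max"))),
        "left")
    .select(dsCustomer.col("customerID"),
            dsCustomer.col("idpt"),
            dsCustomer.col("totalAmount"),
            dsCategory.col("categoryID").alias("category"));
withCategory.show();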

Spark Dataset - NullPointerException while doing a filter on dataset

I have 2 datasets as shown below. I'm trying to find out how many products are associated with each game; basically, I'm trying to keep a count of the number of associated products.
scala> df1.show()
gameid | games | users | cnt_assoc_prod
-------------------------------------------
1 | cricket |[111, 121] |
2 | basketball|[211] |
3 | skating |[101, 100, 98] |
scala> df2.show()
user | products
----------------------
98 | "shampoo"
100 | "soap"
101 | "shampoo"
111 | "shoes"
121 | "honey"
211 | "shoes"
I'm trying to iterate through each of df1's users from the array and find the corresponding row in df2 by applying a filter on the user column.
df1.map { x => {
  var assoc_products = new Set()
  x.users.foreach(y =>
    assoc_products + df2.filter(z => z.user == y).first().products)
  x.cnt_assoc_prod = assoc_products.size
}}
While applying the filter, I get the following exception:
java.lang.NullPointerException
at org.apache.spark.sql.Dataset.logicalPlan(Dataset.scala:784)
at org.apache.spark.sql.Dataset.mapPartitions(Dataset.scala:344)
at org.apache.spark.sql.Dataset.filter(Dataset.scala:307)
I'm using Spark version 1.6.1.
The NullPointerException happens because df2 only exists on the driver and cannot be referenced inside another Dataset's transformations, which run on the executors. Instead, you can explode the users column in df1, join with df2 on the user column, then do the groupBy count:
(df1.withColumn("user", explode(col("users")))
   .join(df2, Seq("user"))
   .groupBy("gameid", "games")
   .agg(count($"products").alias("cnt_assoc_prod"))
).show
+------+----------+--------------+
|gameid| games|cnt_assoc_prod|
+------+----------+--------------+
| 3| skating| 3|
| 2|basketball| 1|
| 1| cricket| 2|
+------+----------+--------------+
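If you need the same thing through the Java API, a rough equivalent is sketched below, assuming df1 and df2 have the schemas shown above (untested):

import static org.apache.spark.sql.functions.*;

// Explode the users array, join on user, then count products per game.
df1.withColumn("user", explode(col("users")))
   .join(df2, "user")
   .groupBy("gameid", "games")
   .agg(count(col("products")).alias("cnt_assoc_prod"))
   .show();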

How to perform a query using a field that is a merge of 2 columns?

I'm building up a series of distribution analysis using Java Spark library. This is the actual code I'm using to fetch the data from a JSON file and save the output.
Dataset<Row> dataset = spark.read().json("local/foods.json");
dataset.createOrReplaceTempView("cs_food");
List<GenericAnalyticsEntry> menu_distribution = spark
    .sql(" ****REQUESTED QUERY ****")
    .toJavaRDD()
    .map(row -> Triple.of(row.getString(0),
        BigDecimal.valueOf(row.getLong(1)),
        BigDecimal.valueOf(row.getLong(2))))
    .map(GenericAnalyticsEntry::of)
    .collect();
writeObjectAsJsonToHDFS(fs, "/local/output/menu_distribution_new.json", menu_distribution);
The query I'm looking for is based on this structure:
+------------+-------------+------------+------------+
| FIRST_FOOD | SECOND_FOOD | DATE | IS_SPECIAL |
+------------+-------------+------------+------------+
| Pizza | Spaghetti | 11/02/2017 | TRUE |
+------------+-------------+------------+------------+
| Lasagna | Pizza | 12/02/2017 | TRUE |
+------------+-------------+------------+------------+
| Spaghetti | Spaghetti | 13/02/2017 | FALSE |
+------------+-------------+------------+------------+
| Pizza | Spaghetti | 14/02/2017 | TRUE |
+------------+-------------+------------+------------+
| Spaghetti | Lasagna | 15/02/2017 | FALSE |
+------------+-------------+------------+------------+
| Pork | Mozzarella | 16/02/2017 | FALSE |
+------------+-------------+------------+------------+
| Lasagna | Mozzarella | 17/02/2017 | FALSE |
+------------+-------------+------------+------------+
How can I achieve the output below from the code written above?
+------------+--------------------+----------------------+
| FOODS | occurrences(First) | occurrences (Second) |
+------------+--------------------+----------------------+
| Pizza | 2 | 1 |
+------------+--------------------+----------------------+
| Lasagna | 2 | 1 |
+------------+--------------------+----------------------+
| Spaghetti | 2 | 3 |
+------------+--------------------+----------------------+
| Mozzarella | 0 | 2 |
+------------+--------------------+----------------------+
| Pork | 1 | 0 |
+------------+--------------------+----------------------+
I've of course tried to figure out a solution by myself, but had no luck with my tries. I may be wrong, but I need something like this:
"SELECT (first_food + second_food) AS menu, COUNT(first_food), COUNT(second_food) FROM cs_food GROUP BY menu"
From the example data, this looks like it will produce the output you want:
select
  f.food,
  coalesce(first_count, 0) as first_count,
  coalesce(second_count, 0) as second_count
from
  (select first_food as food from menus
   union select second_food from menus) as f
left join (
  select first_food, count(*) as first_count from menus
  group by first_food
) as ff on ff.first_food = f.food
left join (
  select second_food, count(*) as second_count from menus
  group by second_food
) as sf on sf.second_food = f.food;
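Note that the query above reads from a table called menus, while the code in the question registers the temp view as cs_food, so substitute the name accordingly. As an alternative sketch that stays inside the spark.sql() call from the question, UNION ALL plus conditional aggregation avoids the two self-joins (untested against your data):

Dataset<Row> result = spark.sql(
    "SELECT food, " +
    "       SUM(is_first) AS occurrences_first, " +
    "       SUM(is_second) AS occurrences_second " +
    "FROM (SELECT first_food AS food, 1 AS is_first, 0 AS is_second FROM cs_food " +
    "      UNION ALL " +
    "      SELECT second_food, 0, 1 FROM cs_food) t " +
    "GROUP BY food");
result.show();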
A simple combination of flatMap and groupBy should do the job, like this (sorry, I can't check if it's 100% correct right now):
import org.apache.spark.sql.{Row, functions => F}
import spark.sqlContext.implicits._
val df = Seq(("Pizza", "Pasta"), ("Pizza", "Soup")).toDF("first", "second")
df.flatMap { case Row(first: String, second: String) => Seq((first, 1, 0), (second, 0, 1)) }
  .groupBy("_1")
  .agg(F.sum("_2").as("first_count"), F.sum("_3").as("second_count"))
  .show()

Select 1 item per attribute value in Spring Data MongoRepository

I have a collection of objects in MongoDB and am using Spring Data MongoDB.
My collection of entities looks something like this:
--------------------------------------------
| id | snapshot | name |
--------------------------------------------
| 2 | somedate | bla |
| 2 | somedate | foo |
| 3 | somedate | bar |
| 3 | somedate | cheese |
| 6 | somedate | milk |
| 6 | somedate | lorum |
| 6 | somedate | ipsum |
| 9 | somedate | do |
| 10 | somedate | re |
| 10 | somedate | mi |
| 15 | somedate | fa |
--------------------------------------------
I want to get a list of objects containing only one object for each distinct id; the object for that id should be the one with the latest date.
My result should be something like this:
--------------------------------------------
| id | snapshot | name |
--------------------------------------------
| 2 | somedate | bla |
| 3 | somedate | bar |
| 6 | somedate | milk |
| 9 | somedate | do |
| 10 | somedate | mi |
| 15 | somedate | fa |
--------------------------------------------
Is this possible in using a MongoRepository query?
I'd appreciate any help.
With the aggregation framework it's possible. Run the following aggregation operation to get the desired result:
db.collection.aggregate([
  { "$sort": { "snapshot": -1 } },
  { "$group": {
      "_id": "$id",
      "snapshot": { "$first": "$snapshot" },
      "name": { "$first": "$name" }
  }}
])
The above native aggregation operation can then be translated to Spring Data MongoDB aggregation as:
import static org.springframework.data.domain.Sort.Direction.DESC;
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;

TypedAggregation<Entity> aggregation = newAggregation(Entity.class,
    sort(DESC, "snapshot"),
    group("id")
        .first("snapshot").as("snapshot")
        .first("name").as("name")
);
AggregationResults<EntityStats> result = mongoTemplate.aggregate(aggregation, EntityStats.class);
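To consume the result, getMappedResults() returns the mapped list. EntityStats stands in for whatever projection class you map the output to; the getters below are assumptions:

List<EntityStats> latestPerId = result.getMappedResults();
latestPerId.forEach(e -> System.out.println(e.getId() + " -> " + e.getName()));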

MySQL SUBTRACT with SUM Query for One Single Column Same Table

Transactions_Table:
+---------+--------+-------------+--------------+-----+
| DocType | SFCode | Productname | WarrantyCode | QTY |
+---------+--------+-------------+--------------+-----+
| FP | 12 | Item | 1111-01 | 100 | -100
| FP | 12 | Item | 2222-22 | 200 |
| FP | 12 | Item | 3333-33 | 350 | -350
| LP | 12 | Item | 4444-44 | 10 |
| LP | 12 | Item | 5555-55 | 20 |
| LP | 12 | Item | 6666-66 | 35 | -35
| CAS | 12 | Item | 1111-01 | 50 | -(50 Left, show)
| CRS | 12 | Item | 3333-33 | 120 | -(230 Left, show)
| CRS | 12 | Item | 6666-66 | 35 | -(0 Left, no show)
| FPR | 12 | Item | 1111-01 | 10 | -(40 Left, show)
| LPR | 12 | Item | 5555-55 | 20 | -(0 Left, no show)
| CSR | 12 | Item | 1111-01 | 5 | -(50+5 Left, show)
| CRR | 12 | Item | 6666-66 | 5 | -(Got back 5, show)
+---------+--------+-------------+--------------+-----+
KEY:
FP: Foreign Purchase
LP: Local Purchase
CAS: Cash Sale
CRS: Credit Sale
FPR: Foreign Purchase Return
LPR: Local Purchase Return
CSR: Cash Sale Return
CRR: Credit Sale Return
There are many products, but for now I'm focusing on a single SFCode, "12".
QTY is the physical stock PRESENT in the store, and the DocType values are the transactions.
There are 2 things I need to do with this table:
Get the current stock, which is (FP+LP+CSR+CRR) - (FPR+LPR+CAS+CRS). Note: there may be no transaction of a particular DocType.
Get the warranty code(s) for a product which has not been sold out for a particular warranty code. Go from top to bottom in the table's last (unnamed) column and you will get the idea.
Please suggest Java-MySQL statement(s) that will help me achieve this result. Any help is appreciated.
Try something like this for #1:
SELECT SFCode, SUM(FP + LP + CSR + CRR - FPR - LPR - CAS - CRS) AS Total FROM
  (SELECT SFCode,
    SUM(IF(DocType = 'FP', QTY, 0)) AS FP,   SUM(IF(DocType = 'LP', QTY, 0)) AS LP,
    SUM(IF(DocType = 'CSR', QTY, 0)) AS CSR, SUM(IF(DocType = 'CRR', QTY, 0)) AS CRR,
    SUM(IF(DocType = 'FPR', QTY, 0)) AS FPR, SUM(IF(DocType = 'LPR', QTY, 0)) AS LPR,
    SUM(IF(DocType = 'CAS', QTY, 0)) AS CAS, SUM(IF(DocType = 'CRS', QTY, 0)) AS CRS
   FROM Transactions_Table
   WHERE SFCode = '12'
   GROUP BY SFCode, DocType) AS totals
GROUP BY SFCode;
This is my shot at #2: (This assumes SFCode isn't an integer)
SELECT a.SFCode, a.WarrantyCode, (a.QTY - COALESCE(b.QTY, 0)) AS Stock FROM
  (SELECT SFCode, WarrantyCode, SUM(QTY) AS QTY
   FROM Transactions_Table
   WHERE SFCode = '12'
     AND DocType IN ('FP','LP','CSR','CRR')
   GROUP BY SFCode, WarrantyCode) AS a
LEFT JOIN
  (SELECT SFCode, WarrantyCode, SUM(QTY) AS QTY
   FROM Transactions_Table
   WHERE SFCode = '12'
     AND DocType IN ('FPR','LPR','CAS','CRS')
   GROUP BY SFCode, WarrantyCode) AS b
ON a.SFCode = b.SFCode AND a.WarrantyCode = b.WarrantyCode
HAVING Stock > 0; -- drop the "no show" warranty codes that are sold out
Can't really test this myself right now but this should at least give you an idea.
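Since the question asks for Java-MySQL statements, here is a minimal JDBC sketch for #1; the connection URL, credentials, and database name are placeholders, and the query condenses the conditional sums into a single expression (purchases and sale returns add, everything else subtracts):

import java.sql.*;

public class StockCheck {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/storedb"; // placeholder
        String sql = "SELECT SFCode, "
            + "SUM(IF(DocType IN ('FP','LP','CSR','CRR'), QTY, -QTY)) AS Total "
            + "FROM Transactions_Table WHERE SFCode = ? GROUP BY SFCode";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, "12"); // assumes SFCode is stored as a string
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("SFCode") + ": " + rs.getLong("Total"));
                }
            }
        }
    }
}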
