Why does Cassandra upsert some rows with a unique primary key during multiple insertions? - java

This is the entity class for which the table is created in the Cassandra database.
@Table
public class Log {
    @PrimaryKeyColumn(ordinal = 0, type = PrimaryKeyType.PARTITIONED)
    private LocalDate date; // partition key
    @PrimaryKeyColumn(ordinal = 1, type = PrimaryKeyType.PARTITIONED)
    private String name; // partition key
    @PrimaryKeyColumn(ordinal = 2, type = PrimaryKeyType.PARTITIONED)
    private String category; // partition key
    @PrimaryKeyColumn(ordinal = 3, type = PrimaryKeyType.CLUSTERED)
    private String timeStamp; // clustering column
    private String type;
    private String description;
}
The primary key is ((date, name, category), timeStamp), where (date, name, category) is the partition key and timeStamp is the clustering column.
This is the data expected after the insertions are performed:
Date       | name | category | timestamp    | type  | description
-----------+------+----------+--------------+-------+------------
2017-10-11 | 201  | General  | 19:51:18.732 | short | desc1
2017-10-11 | 201  | General  | 19:51:18.735 | long  | desc2
2017-10-11 | 201  | General  | 19:51:18.736 | short | desc3
2017-10-11 | 201  | General  | 19:51:18.738 | long  | desc4
2017-10-11 | 201  | General  | 19:51:18.740 | short | desc5
2017-10-11 | 201  | General  | 19:51:18.741 | short | desc6
2017-10-11 | 201  | General  | 19:51:18.743 | long  | desc7
2017-10-11 | 201  | General  | 19:51:18.745 | short | desc8
The data below is what is actually stored in Cassandra when the insertion is performed using CrudRepository's save method. Some rows are upserted: the rows with descriptions desc3 and desc5 are missing.
Date       | name | category | timestamp    | type  | description
-----------+------+----------+--------------+-------+------------
2017-10-11 | 201  | General  | 19:51:18.732 | short | desc1
2017-10-11 | 201  | General  | 19:51:18.735 | long  | desc2
2017-10-11 | 201  | General  | 19:51:18.738 | long  | desc4
2017-10-11 | 201  | General  | 19:51:18.741 | short | desc6
2017-10-11 | 201  | General  | 19:51:18.743 | long  | desc7
2017-10-11 | 201  | General  | 19:51:18.745 | short | desc8
The repository being used is an interface that extends CrudRepository:
public interface LogRepo extends CrudRepository<Log, LocalDate> {
    ...
}
The insertion code is as follows:
for (...) {
    ..
    logRepo.save(log); // each object is saved
    ..
}
Since the clustering column is the timestamp and it is unique in this case, the rows should not have been upserted. Please let me know if anything is missing that would avoid the upserting of records.
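
For reference, here is a minimal sketch (my own illustration, not a confirmed fix for the behaviour above) of how a composite primary key is usually modelled in Spring Data Cassandra with a @PrimaryKeyClass, so that the repository ID type covers the whole primary key instead of only the date. The class name LogKey is hypothetical.

@PrimaryKeyClass
public class LogKey implements Serializable {
    @PrimaryKeyColumn(ordinal = 0, type = PrimaryKeyType.PARTITIONED)
    private LocalDate date;
    @PrimaryKeyColumn(ordinal = 1, type = PrimaryKeyType.PARTITIONED)
    private String name;
    @PrimaryKeyColumn(ordinal = 2, type = PrimaryKeyType.PARTITIONED)
    private String category;
    @PrimaryKeyColumn(ordinal = 3, type = PrimaryKeyType.CLUSTERED)
    private String timeStamp;
    // equals() and hashCode() over all four fields are required for a key class
}

@Table
public class Log {
    @PrimaryKey
    private LogKey key;
    private String type;
    private String description;
}

public interface LogRepo extends CrudRepository<Log, LogKey> {
}

With this shape, save(log) addresses a row by all four key components, and two Log objects can only overwrite each other when the entire key, including the clustering timeStamp, is identical.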


Springfox: Short type is considered as $int32

Language
Java / Spring Boot / Springfox 2.9.2
Description
Hi, I'm using springfox-swagger-ui and springfox-bean-validators.
How can I tell Swagger that my property is a Short ($int16)?
My POJO
@ApiModelProperty(required = true, dataType = "java.lang.Short")
@NotNull
@JsonProperty("deviceId")
private Short deviceId;
The result in Swagger
deviceId* | integer($int32)
Expected
deviceId* | $int16
Thanks a lot
Cordially
Unfortunately you can't. The Swagger specification does not support a short (int16) data type.
The supported data types are:
+-------------+---------+-----------+--------------------------------------------------+
| Common Name | type | format | Comments |
+-------------+---------+-----------+--------------------------------------------------+
| integer | integer | int32 | signed 32 bits |
| long | integer | int64 | signed 64 bits |
| float | number | float | |
| double | number | double | |
| string | string | | |
| byte | string | byte | base64 encoded characters |
| binary | string | binary | any sequence of octets |
| boolean | boolean | | |
| date | string | date | As defined by full-date - RFC3339 |
| dateTime | string | date-time | As defined by date-time - RFC3339 |
| password | string | password | Used to hint UIs the input needs to be obscured. |
+-------------+---------+-----------+--------------------------------------------------+
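
As a possible workaround (my own sketch, not part of the original answer): since the property will be rendered as integer($int32) either way, you can at least document the int16 value range with bean-validation annotations, which springfox-bean-validators should surface as minimum/maximum in the generated schema.

import javax.validation.constraints.Max;
import javax.validation.constraints.Min;

@ApiModelProperty(required = true)
@NotNull
@Min(-32768) // Short.MIN_VALUE, rendered as "minimum"
@Max(32767)  // Short.MAX_VALUE, rendered as "maximum"
@JsonProperty("deviceId")
private Short deviceId;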

Memory leak with HashMap

I have a memory leak problem that I need to resolve.
This heap dump excerpt may help to find the memory leak:
2'777'369'064 (62.72%) [32] 8 class */planning/canvas/shared/serializable/ActionCycleSZ 0x68759f768
|- 2'777'365'536 (62.72%) [256] 35 org/apache/catalina/loader/WebappClassLoader 0x688ce9df8
| |- 2'775'589'272 (62.68%) [48] 1 java/util/HashMap 0x688ceabe0
| | |- 2'775'589'224 (62.68%) [32'784] 3'533 array of java/util/HashMap$Entry 0x689af74c0
| | |- 2'763'509'944 (62.41%) [24] 2 java/util/HashMap$Entry 0x68a0b1f98
| | | |- 2'763'509'744 (62.41%) [40] 1 org/apache/catalina/loader/ResourceEntry 0x68a0b1fb0
| | | | |- 2'763'509'704 (62.41%) [32] 41 class */gwt/server/servlet/TaProjectsSessionManager 0x68653c8e8
| | | | |- 2'763'047'360 (62.4%) [32] 6 class */selfservice/SelfConfigurator 0x6875922a0
| | | | | |- 2'763'047'328 (62.4%) [16] 2 */gwt/server/servlet/TaProjectsSessionManager$1 0x689aee328
| | | | | | |- 2'154'573'968 (48.66%) [160] 30 */impl/HRSessionImpl 0x689ee49f8
| | | | | | | |- 2'138'350'824 (48.29%) [32] 3 java/util/Collections$SynchronizedMap 0x689ee4c70
| | | | | | | | |- 2'138'350'760 (48.29%) [64] 3 org/apache/commons/collections/map/LRUMap 0x689ee5218
| | | | | | | | | |- 2'134'913'368 (48.21%) [32] 2 org/apache/commons/collections/map/AbstractLinkedMap$LinkEntry 0x68a3573d0
| | | | | | | | | |- 3'437'328 (0.08%) [2'064] 121 array of org/apache/commons/collections/map/AbstractHashedMap$HashEntry 0x68a356bc8
| | | | | | | | | |- 16 (0%) [16] 1 org/apache/commons/collections/map/AbstractHashedMap$KeySet 0x69d443088
| | | | | | | | |- 32 (0%) [16] 2 java/util/Collections$SynchronizedSet 0x69d443098
| | | | | | | | |- 2'138'350'824 (48.29%) [32] 3 java/util/Collections$SynchronizedMap 0x689ee4c70
| | | | | | | |- 16'078'096 (0.36%) [104] 19 */impl/Dictionary 0x689ee4ad8
I conclude that the class ActionCycleSZ produces the memory leak.
This is ActionCycleSZ:
public class ActionCycleSZ extends ActionDTO implements IsSerializable {
    private CycleSZ bean;

    public ActionCycleSZ() {
    }

    public ActionCycleSZ(Type actionType, CycleSZ bean) {
        super(actionType);
        this.bean = bean;
    }

    public CycleSZ getBean() {
        return bean;
    }

    public void setBean(CycleSZ bean) {
        this.bean = bean;
    }
}
public class CycleSZ implements Serializable {
    private static final long serialVersionUID = 1L;
    String cycleLabel;
    Date startDate;
    Date endDate;
    String startDateDTO;
    String endDateDTO;
    Integer numlign;
    String accumulatedHours;
    List<SiteSZ> listOfSites = new LinkedList<SiteSZ>();
    // getters and setters
}
public class SiteSZ implements Serializable {
    private static final long serialVersionUID = 1L;
    int week;
    String siteLabel;
    Date startDate;
    Date endDate;
    String startHour;
    String endHour;
    String site;
    String time;
    String particularSlotTime;
    Integer numlign;
    DaySZ dayAttribute;
    String accumulatedWeekHours;
    Map<Util.WeekDays, DaySZ> mapAttributes = new LinkedHashMap<Util.WeekDays, DaySZ>();
    boolean workedDay; // Flag for Exceptional Canevas Entry
    boolean reposHebdo;
    String contratId; // contratId for Exceptional Canevas Entry
}
In every ActionCycleSZ I have just this map: mapAttributes = new LinkedHashMap<Util.WeekDays, DaySZ>().
I think that is the cause of the leak. Am I right? I have looked at the code and I don't see anything that looks like a memory leak.
Can anyone help me detect this memory leak, or give me examples of memory leaks caused by a HashMap?
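
For illustration only (this example is mine, not taken from the post): the classic HashMap-related leak is a long-lived map, typically static or reachable from something with application scope such as a session manager or a web app class loader, whose entries are added but never removed.

import java.util.HashMap;
import java.util.Map;

// Illustrative leak pattern: a static map used as a cache that is only ever written to.
// Every entry keeps its key and value strongly reachable, so the GC can never reclaim
// them, and the map grows for the lifetime of the application (or of the web app's
// class loader, as in the dump above).
public class SessionCache {
    private static final Map<String, byte[]> CACHE = new HashMap<>();

    public static void remember(String sessionId, byte[] payload) {
        CACHE.put(sessionId, payload); // entries are never removed or expired
    }
}

In your dump the retained memory sits under a session manager and an LRUMap, so it is worth checking whether entries are removed when sessions end and whether the LRUMap is actually bounded to a sensible maximum size. A heap analyzer such as Eclipse MAT can show exactly which map entry retains the large ActionCycleSZ/CycleSZ graphs.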

Restriction on @OneToMany mapping

I have a Product class with a @OneToMany association to a list of buyers. When I search for a product, I want the buyers fetched through that association to be restricted by a null constraint on the end date column of the Buyer table. How can I do this with a list mapping like the one below?
// it would be something like what I need: cri.createCriteria("listaBuyer", "buyer").add(Restrictions.isNull("finalDate"));
Example
Registered data
product code | initial date | final date
-------------+--------------+------------
1            | 2016-28-07   | 2017-28-07
2            | 2016-10-08   | 2017-28-07
3            | 2017-28-08   |
4            | 2017-30-08   |
Product Class
public class Product {
@OneToMany(targetEntity=Buyer.class, orphanRemoval=true, cascade={CascadeType.PERSIST,CascadeType.MERGE}, mappedBy="product")
@LazyCollection(LazyCollectionOption.FALSE)
public List<Buyer> getListaBuyer() {
    if (listaBuyer == null) {
        listaBuyer = new ArrayList<Buyer>();
    }
    return listaBuyer;
}
}
The criteria I built:
Criteria cri = getSession().createCriteria(Product.class);
cri.createCriteria("status", "sta");
cri.add(Restrictions.eq("id", Product.getId()));
return cri.list();
Expected outcome
product code | initial date | final date
-------------+--------------+------------
3            | 2017-28-08   |
4            | 2017-30-08   |
Returned result
product code | initial date | final date
-------------+--------------+------------
1            | 2016-28-07   | 2017-28-07
2            | 2016-10-08   | 2017-28-07
3            | 2017-28-08   |
4            | 2017-30-08   |
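
A sketch of one way to do this, assuming the goal (per the expected outcome above) is to return only products whose buyers have a null finalDate; the alias and property names come from the question, the rest is my assumption:

Criteria cri = getSession().createCriteria(Product.class);
cri.createCriteria("status", "sta");
cri.createAlias("listaBuyer", "buyer");          // join the @OneToMany association
cri.add(Restrictions.isNull("buyer.finalDate")); // keep only products whose buyer end date is null
return cri.list();

Note that a restriction added this way filters which Product rows come back, not which Buyer objects end up inside each product's collection. If the intent is to filter the collection contents themselves, Hibernate's @Where(clause = "final_date is null") on the collection mapping (the column name is an assumption) or a @Filter is the usual tool.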

Select 1 item per attribute value in Spring Data MongoRepository

I have a collection of objects in MongoDB and am using Spring Data MongoDB.
My collection of entities look something like this:
--------------------------------------------
| id | snapshot | name |
--------------------------------------------
| 2 | somedate | bla |
| 2 | somedate | foo |
| 3 | somedate | bar |
| 3 | somedate | cheese |
| 6 | somedate | milk |
| 6 | somedate | lorum |
| 6 | somedate | ipsum |
| 9 | somedate | do |
| 10 | somedate | re |
| 10 | somedate | mi |
| 15 | somedate | fa |
--------------------------------------------
I want to get a list of objects with only one object for each distinct id; the object chosen for each id should be the one with the latest date.
My result should be something like this:
--------------------------------------------
| id | snapshot | name |
--------------------------------------------
| 2 | somedate | bla |
| 3 | somedate | bar |
| 6 | somedate | milk |
| 9 | somedate | do |
| 10 | somedate | mi |
| 15 | somedate | fa |
--------------------------------------------
Is this possible using a MongoRepository query?
I'd appreciate any help.
With the aggregation framework it's possible. Run the following aggregation operation to get the desired result:
db.collection.aggregate([
{ "$sort": { "snapshot": -1 } },
{
"$group": {
"_id": "$id",
"snapshot": { "$first": "$snapshot" },
"name": { "$first": "$name" }
}
}
])
The above native aggregation operation can then be translated to Spring Data MongoDB aggregation as:
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
import static org.springframework.data.domain.Sort.Direction.DESC;
TypedAggregation<Entity> aggregation = newAggregation(Entity.class,
sort(DESC, "snapshot"),
group("id")
.first("snapshot").as("snapshot")
.first("name").as("name")
);
AggregationResults<EntityStats> result = mongoTemplate.aggregate(aggregation, EntityStats.class);
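
To read the output (a small usage sketch; EntityStats is the projection class named in the answer and is assumed to carry id, snapshot and name fields):

// One element per distinct id, each holding the latest snapshot and its name.
List<EntityStats> latestPerId = result.getMappedResults();
for (EntityStats entity : latestPerId) {
    System.out.println(entity);
}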

How to query data across 3 tables

Hi, I have a table parent and its rows are:
mysql> select * from parent;
+----+----------+------------+-----------------------------------+---------+------+
| id | category | is_deleted | name | version | cid |
+----+----------+------------+-----------------------------------+---------+------+
| 1 | default | | Front Office | 0 | NULL |
| 2 | default | | Food And Beverage | 0 | NULL |
| 3 | default | | House Keeping | 0 | NULL |
| 4 | default | | General | 0 | NULL |
| 5 | client | | SPA | 0 | NULL |
| 7 | client | | house | 0 | NULL |
| 8 | client | | test | 0 | NULL |
| 9 | client | | ggg | 0 | 1 |
| 10 | client | | dddd | 0 | 1 |
| 11 | client | | test1 | 0 | 1 |
| 12 | client | | java | 0 | 1 |
| 13 | client | | dcfdcddd | 0 | 1 |
| 14 | client | | qqqq | 0 | 1 |
| 15 | client | | nnnnnn | 0 | 1 |
| 16 | client | | category | 0 | 1 |
| 17 | client | | sukant | 0 | 1 |
| 18 | client | | bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb | 0 | 1 |
I have another table, parent_question:
mysql> select * from parent_question;
+----+------------+---------+-----+------+
| id | is_deleted | version | pid | qid |
+----+------------+---------+-----+------+
| 1 | | 0 | 1 | 1 |
| 2 | | 0 | 1 | 2 |
| 3 | | 0 | 1 | 3 |
| 4 | | 0 | 1 | 4 |
| 5 | | 0 | 1 | 5 |
| 6 | | 0 | 1 | 6 |
| 7 | | 0 | 2 | 7 |
| 8 | | 0 | 2 | 1 |
| 9 | | 0 | 2 | 2 |
| 10 | | 0 | 2 | 8 |
| 11 | | 0 | 3 | 9 |
| 12 | | 0 | 3 | 1 |
| 13 | | 0 | 3 | 10 |
| 14 | | 0 | 3 | 11 |
| 15 | | 0 | 4 | 12 |
| 16 | | 0 | 1 | 1 |
| 17 | | 0 | 1 | 2 |
| 18 | | 0 | 1 | 3 |
| 19 | | 0 | 5 | 13 |
| 20 | | 0 | 2 | 7 |
| 21 | | 0 | 2 | 2 |
| 22 | | 0 | 1 | 14 |
| 23 | | 0 | 1 | 15 |
| 24 | | 0 | 1 | 16 |
| 25 | | 1 | 1 | 17 |
| 26 | | 0 | 1 | 21 |
| 27 | | 0 | 2 | 22 |
| 28 | | 0 | 13 | 23 |
| 29 | | 0 | 9 | 24 |
| 30 | | 0 | 12 | 25 |
| 31 | | 0 | 12 | 26 |
| 32 | | 0 | 12 | 27 |
| 33 | | 0 | 12 | 28 |
| 34 | | 0 | 14 | 29 |
| 35 | | 0 | 15 | 30 |
| 36 | | 0 | 10 | 31 |
| 37 | | 0 | 4 | 32 |
| 38 | | 0 | 16 | 33 |
| 39 | | 0 | 10 | 34 |
| 40 | | 0 | 3 | 35 |
| 41 | | 0 | 17 | 36 |
| 42 | | 0 | 1 | 37 |
| 43 | | 0 | 1 | 38 |
| 44 | | 0 | 18 | 39 |
| 45 | | 0 | 18 | 40 |
+----+------------+---------+-----+------+
45 rows in set (0.00 sec)
and this is my question table:
mysql> select * from question;
----+----------+------------+------------------------------------------------------------+--
id | category | is_deleted | question | v
----+----------+------------+------------------------------------------------------------+--
1 | default | | Staff Courtesy |
2 | default | | Staff Response |
3 | default | | Check In |
4 | default | | Check Out |
5 | default | | Travel Desk |
6 | default | | Door Man |
7 | default | | Restaurant Ambiance |
8 | default | | Quality Of Food |
9 | default | | Cleanliness Of The Room |
10 | default | | Room Size |
11 | default | | Room Amenities |
12 | default | | Any Other Comments ? |
13 | client | | How is Food? |
14 | client | | test question |
15 | client | | test1 |
16 | client | | test2 |
17 | client | | test2 |
18 | client | | test2 |
19 | client | | working |
20 | client | | sss |
21 | client | | ggggg |
22 | client | | this is new question |
23 | client | | dddddddddddd |
24 | client | | ggggggggggggggggg |
25 | client | | what is a class? |
26 | client | | what is inheritance |
27 | client | | what is an object |
28 | client | | what is an abstract class? |
29 | client | | qqqq |
30 | client | | nnnn question |
31 | client | | add some |
32 | client | | general question |
33 | client | | category question |
34 | client | | hhhhhhhh |
35 | client | | this is hos |
36 | client | | gggg |
37 | client | | dddd |
38 | client | | ddddd |
39 | client | | bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb |
40 | client | | ggg |
----+----------+------------+------------------------------------------------------------+--
What I know: the pid from the parent_question table.
What I want: the corresponding question from the question table.
For example, if I were asked to find the questions whose pid is 18, then from the parent_question table I can see that qid is 39 and 40, and from the question table 39 refers to bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb and 40 refers to ggg.
What I tried:
String queryString="SELECT distinct q FROM Question q , ParentQuestion pq ,Parent p where pq.qid.id = q.id and p.id = pq.pid.id and p.category = 'default' AND p.id = "+pid;
Query query=entityManagerUtil.getQuery(queryString);
List questionsList = query.getResultList();
return questionsList;
but it did not work. I mean, I get nothing in the list. Can anybody point out my mistake?
Question entity class:
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Version;
import javax.validation.constraints.Size;
import org.springframework.beans.factory.annotation.Configurable;
@Configurable
@Entity
public class Question {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "id")
private Long id;
@Version
@Column(name = "version")
private Integer version;
private String question;
private String category;
private boolean isDeleted;
public boolean isDeleted() {
return isDeleted;
}
public void setDeleted(boolean isDeleted) {
this.isDeleted = isDeleted;
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public Integer getVersion() {
return version;
}
public void setVersion(Integer version) {
this.version = version;
}
public String getQuestion() {
return question;
}
public void setQuestion(String question) {
this.question = question;
}
public String getCategory() {
return category;
}
public void setCategory(String category) {
this.category = category;
}
}
Parent entity class:
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.Version;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;
import org.springframework.beans.factory.annotation.Configurable;
@Configurable
@Entity
public class Parent {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "id")
private Long id;
@Version
@Column(name = "version")
private Integer version;
@ManyToOne
private Client cid;
public Client getCid() {
return cid;
}
public void setCid(Client cid) {
this.cid = cid;
}
private boolean isDeleted;
/* @ManyToOne
private Client cid;
public Client getCid() {
return cid;
}
public void setCid(Client cid) {
this.cid = cid;
}*/
public boolean isDeleted() {
return isDeleted;
}
public void setDeleted(boolean isDeleted) {
this.isDeleted = isDeleted;
}
private String name;
private String category;
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public Integer getVersion() {
return version;
}
public void setVersion(Integer version) {
this.version = version;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getCategory() {
return category;
}
public void setCategory(String category) {
this.category = category;
}
}
Commenters are confused by your classes, and that is a good indication that you might need to rethink your design. suninsky has a good point that you may not need an entity class called ParentQuestion (unless ParentQuestion carries extra data about the relationship, of course). Here are the typical questions I would ask:
Does every Question have a Parent? If so, then there should probably be a parent property on your Question class, mapped as @ManyToOne.
Does every Parent object have a set of Questions? If yes, then the Parent object should probably have a property named questions, whose type is some kind of collection of Question objects. A sketch of such a mapping follows.
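
For illustration, here is a rough sketch of a mapping that keeps the existing parent_question join table (the table and column names pid/qid come from the question; the mapping itself is my assumption, not the answerer's exact code, and it ignores the version and is_deleted columns on the join table, which would require keeping a ParentQuestion entity):

@Entity
public class Parent {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @ManyToMany
    @JoinTable(name = "parent_question",
               joinColumns = @JoinColumn(name = "pid"),
               inverseJoinColumns = @JoinColumn(name = "qid"))
    private List<Question> questions = new ArrayList<Question>();

    // other fields, getters and setters as in the original class
}

With that association in place, "all questions for pid 18" becomes a join through the mapping instead of a cross join of three entities:

// entityManager is a plain JPA EntityManager
List<Question> questions = entityManager
        .createQuery("select q from Parent p join p.questions q where p.id = :pid", Question.class)
        .setParameter("pid", 18L)
        .getResultList();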
