I have been trying for days to persist data obtained from a JTable (populated from an imported Excel sheet) without success, and chasing the exception is sickening.
Here is part of the code and the error stack trace:
@Action
public void persist(){
    emf = Persistence.createEntityManagerFactory("MauranaSurveyPU");
    em = emf.createEntityManager();
    em.getTransaction().begin();
    // loop through the table to retrieve objects and persist them
    int count = jTable1.getRowCount();
    for(int i = 0; i < count; i++){
        Mauranagroup mn = new Mauranagroup();
        String obj1 = (String) GetData(jTable1, i, 0);
        String obj2 = (String) GetData(jTable1, i, 1);
        String obj3 = (String) GetData(jTable1, i, 2);
        // set entity fields
        mn.setRespondentId(Integer.parseInt(obj1));
        mn.setMale(obj2);
        mn.setFemale(obj3);
        em.persist(mn);
    } // end for
    em.getTransaction().commit();
} // end method persist

// get an object from the JTable
private Object GetData(JTable jTable1, int x, int y) {
    return jTable1.getModel().getValueAt(x, y);
}
The problem with this code is that it does actually persist the data, but after the transaction commits I get this stack trace:
Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index identified by 'SQL130204062549290' defined on 'MAURANAGROUP'.
Error Code: 20000
Call: INSERT INTO MAURANAGROUP (RESPONDENT_ID, AMOUNTTOBESPENT, AREYOUFAMILIARNO
bind => [211 parameters bound]
Query: InsertObjectQuery(entity.Mauranagroup[ respondentId=5 ])
When I delete the records and persist again, I get another line:
bind => [211 parameters bound]
Query: InsertObjectQuery(entity.Mauranagroup[ respondentId=2 ])
at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:324)
I don't understand: it actually persists and I can see my saved data. Is it the loop, or is the loop not terminating properly? I'm actually getting sick because of this project.
There is a DB constraint on your table that some column is unique, and you are inserting a duplicate value. I guess it is RESPONDENT_ID. If you are trying to persist all the records in the table for the first time, I wouldn't provide the id from the table data. Instead I would use a sequence to generate the id for you and return it to the UI.
If you are trying to modify existing data stored in the table, I would look the records up in the DB by id, update the fields, and persist them again.
By the way, it is good practice to start method names with a lowercase letter (getData rather than GetData).
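As a minimal sketch of the sequence-generated-id approach (the entity and column names mirror the question; the sequence name and mapping details are assumptions, and your provider/database must supply a matching sequence):

```java
import javax.persistence.*;

@Entity
public class Mauranagroup {

    // Let the JPA provider assign the key instead of taking it from the JTable,
    // so re-importing the same sheet can never collide on RESPONDENT_ID.
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "mg_seq")
    @SequenceGenerator(name = "mg_seq", sequenceName = "MAURANAGROUP_SEQ", allocationSize = 1)
    @Column(name = "RESPONDENT_ID")
    private Integer respondentId;

    private String male;
    private String female;

    public Integer getRespondentId() { return respondentId; }
    public String getMale() { return male; }
    public void setMale(String male) { this.male = male; }
    public String getFemale() { return female; }
    public void setFemale(String female) { this.female = female; }
}
```

With this mapping the import loop simply drops the `setRespondentId(...)` call and lets `em.persist(mn)` obtain the key.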
Related
I have a jqGrid (latest free jqGrid version) table filled with data from the database (MS SQL) using a Java REST service. One of the jqGrid columns is a drop-down list with 6 options.
My assignment is to create another database table that contains the drop-down list values and, using foreign keys / primary keys, automatically populate the first table's DDL values.
I am failing to understand the logic behind it. Can someone explain it to me? How can I achieve this? Do I only send an id from jqGrid, and depending on that id (1, 2, ..., 6) it chooses what to set in the first table's DDL column (comparing the id sent with the ids of the table that contains the DDL values)?
I get the feeling I am not expressing myself well... I hope you understand me.
We can start with the database table. It could look like
CREATE TABLE dbo.OrderStatus (
Id int IDENTITY NOT NULL,
Name nvarchar(100) NOT NULL,
CONSTRAINT PK_LT_OrderStatus PRIMARY KEY CLUSTERED (Id),
CONSTRAINT UC_LT_OrderStatus_Name UNIQUE NONCLUSTERED (Name)
)
It allows you to address any item of the OrderStatus table by Id or by Name. The UNIQUE constraint doesn't permit duplicate names. Another table, Order, can have a column referencing it:
CREATE TABLE dbo.Order (
Id int IDENTITY NOT NULL,
OrderStatusId int NOT NULL,
...
)
ALTER TABLE dbo.Order WITH CHECK ADD CONSTRAINT FK_Order_OrderStatus
FOREIGN KEY(OrderStatusId) REFERENCES dbo.OrderStatus (Id)
When filling the grid with data you have two main options: include OrderStatusId in the data, or use the corresponding Name from dbo.OrderStatus:
SELECT Id,OrderStatusId, ... FROM dbo.Order
or
SELECT Id,os.Name AS OrderStatus, ...
FROM dbo.Order AS o
INNER JOIN dbo.OrderStatus AS os ON os.Id=o.OrderStatusId
If you decide to fill the grid with ids (OrderStatusId values), then you will have to use formatter: "select" to display the text in the corresponding column (see here). It requires that editoptions.value be filled with all the distinct values from dbo.OrderStatus. The best way to implement this is to extend the server's response for filling the grid with your custom data and to use beforeProcessing to set editoptions.value. I described the scenario in an earlier answer; I'll summarize it below.
Suppose the response from the server looks like
{
"rows": [{...}, {...}]
}
If the returned data looks just like
[{...}, {...}]
then you should add the wrapping. I suggest that you execute
SELECT Id,Name FROM dbo.OrderStatus
in addition to the main SELECT from dbo.Order (SELECT * FROM dbo.Order), and that you place both results in the server response:
{
"orderStatus": [{"id":1, "name":"Pending"}, ...],
"rows": [{...}, {...}]
}
To process orderStatus you need to add the following beforeProcessing callback, which reads orderStatus and sets editoptions.value of the grid's orderStatus column:
beforeProcessing: function (response) {
    var $self = $(this), orderStatus = response.orderStatus, i, values = "";
    if (orderStatus != null && orderStatus.length > 0) {
        for (i = 0; i < orderStatus.length; i++) {
            if (values.length > 0) {
                values += ";";
            }
            values += orderStatus[i].id + ":" + orderStatus[i].name;
        }
        $self.jqGrid("setColProp", "orderStatus", {
            editoptions: {
                value: values
            }
        });
        if (this.ftoolbar) { // the filter toolbar exists
            $self.jqGrid("destroyFilterToolbar");
            $self.jqGrid("filterToolbar");
        }
    }
}
The above code is not tested, but I hope the main idea is clear from it.
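If you prefer to build the editoptions.value string on the Java side instead (so the client only has to assign it), the same id:name concatenation the callback performs can be sketched like this (class and method names are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class EditOptionsValue {
    // Build the "1:Pending;2:Shipped" string that jqGrid expects in editoptions.value.
    static String toEditOptionsValue(Map<Integer, String> statuses) {
        return statuses.entrySet().stream()
                .map(e -> e.getKey() + ":" + e.getValue())
                .collect(Collectors.joining(";"));
    }

    public static void main(String[] args) {
        Map<Integer, String> statuses = new LinkedHashMap<>(); // insertion order preserved
        statuses.put(1, "Pending");
        statuses.put(2, "Shipped");
        System.out.println(toEditOptionsValue(statuses)); // prints 1:Pending;2:Shipped
    }
}
```

The server would fill the map from `SELECT Id, Name FROM dbo.OrderStatus` and ship the resulting string in the response.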
Entity with an id autogenerated by an Oracle trigger and sequence.
@Entity
@Table(name = "REPORT", schema = "WEBPORTAL")
public class Report {

    private Integer id;
    ....

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "report_sequence")
    @SequenceGenerator(name = "report_sequence", sequenceName = "report_id_seq")
    @Column(name = "REPORT_ID", unique = true, nullable = false)
    public Integer getId() {
        return id;
    }
    ....
}
Service
@Service("reportService")
public class ReportServiceImpl implements ReportService {
    ....
    @Transactional(readOnly = false)
    public void saveOrUpdate(Report report) {
        reportDAO.saveOrUpdate(report);
    }
}
DAO
@Repository
public class ReportDAOImpl implements ReportDAO {
    ....
    @Override
    public Report save(Report report) {
        try {
            Session session = sessionFactory.getCurrentSession();
            session.save(report);
        } catch (Exception e) {
            logger.error("error", e);
        }
        return report;
    }
}
When I call the service's saveOrUpdate and then try to read the id of the entity, I get a different value than the one persisted in the database. The autogenerated values in the database are all fine. Any suggestions?
reportService.saveOrUpdate(report);
System.out.println(report.getId());
prints: 4150
but saved id in database is: 84
NOTE: My reason for needing the id is that I wanted to save children with cascade, but the foreign key on the child was different in the database (it held the values I get from getId()).
Also, the id generated in the database is incremented by 2, e.g. 80, 82, 84.
UPDATE:
Oracle trigger for sequence generation
CREATE OR REPLACE TRIGGER REPORT_ID_TRIG
BEFORE INSERT ON WEBPORTAL.REPORT
FOR EACH ROW
BEGIN
SELECT report_id_seq.NEXTVAL
INTO :new.report_id
FROM dual;
END;
ANSWER: The trigger should check whether the id is null:
CREATE OR REPLACE TRIGGER REPORT_ID_TRIG
BEFORE INSERT ON WEBPORTAL.REPORT
FOR EACH ROW
WHEN (new.report_id is null)
BEGIN
SELECT report_id_seq.NEXTVAL
INTO :new.report_id
FROM dual;
END;
DESCRIPTION:
@GeneratedValue is not just a sequence generator; it uses a HiLo-style allocation. When it first requests an id from the database it multiplies the value by 50 (the allocation size can differ), and the next 50 new entities are given ids consecutively before the database is asked again. This reduces the number of round trips to the database.
The numbers I got from Java were the right numbers that should have been saved on the report.
Without the null check, Hibernate first requested an id from the database, calling sequence.nextval. Then, when Hibernate was persisting the row (completing the transaction), the trigger called sequence.nextval a second time and stored that value in the database. So ReportDetails held the id value Hibernate computed for the report, while the Report row's id was the one set by the database trigger.
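The arithmetic behind those far-apart numbers can be sketched in plain Java (illustrative only; Hibernate's real optimizers differ in detail, and the constants here are assumptions):

```java
import java.util.Iterator;
import java.util.List;

// Illustrative only: shows why a HiLo-style allocator hands out ids far away
// from the raw sequence values. This is NOT Hibernate's exact algorithm.
public class HiLoSketch {
    private final int allocationSize;
    private final Iterator<Long> sequence; // stands in for report_id_seq.NEXTVAL
    private long nextId;
    private long remaining = 0;

    public HiLoSketch(int allocationSize, List<Long> sequenceValues) {
        this.allocationSize = allocationSize;
        this.sequence = sequenceValues.iterator();
    }

    public long nextId() {
        if (remaining == 0) {             // pool exhausted: one DB round trip
            long hi = sequence.next();    // e.g. the sequence returns 83 ...
            nextId = hi * allocationSize; // ... but entities get 4150, 4151, ...
            remaining = allocationSize;
        }
        remaining--;
        return nextId++;
    }
}
```

With an allocation size of 50 and a sequence value of 83, the first entity id is 4150, which is how Java can report an id like 4150 while the trigger stores 84 in the row.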
The problem is that two separate mechanisms are in place to generate the key:
one at the Hibernate level, which calls a sequence and uses the value to populate the id column sent with the insert,
and another at the database level that Hibernate does not know about: the column is overwritten by a trigger.
Hibernate thinks that the insert was made with the value of the sequence, but in the database something else occurred. The simplest solution is probably to remove the trigger mechanism and let Hibernate populate the key from the sequence alone.
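A sketch of that trigger-free mapping, assuming the sequence increments by 1 (allocationSize must match the sequence's INCREMENT BY, otherwise Hibernate's ids and the database's diverge exactly as described above):

```java
import javax.persistence.*;

@Entity
@Table(name = "REPORT", schema = "WEBPORTAL")
public class Report {

    // With no trigger on the table, Hibernate alone calls report_id_seq
    // and sends the value in the INSERT, so getId() matches the stored row.
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "report_sequence")
    @SequenceGenerator(name = "report_sequence", sequenceName = "report_id_seq",
                       allocationSize = 1) // must equal the sequence's INCREMENT BY
    @Column(name = "REPORT_ID", unique = true, nullable = false)
    private Integer id;

    public Integer getId() { return id; }
}
```

After dropping the trigger, cascaded children pick up the same id that getId() returns, which was the original goal.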
Another solution:
Check that your trigger definition is in the following format; the WHEN (new.report_id is null) clause is important.
CREATE OR REPLACE TRIGGER TRIGGER_NAME
BEFORE INSERT ON TABLE_NAME
FOR EACH ROW
WHEN (new.id is null)
BEGIN
SELECT SEQUENCE_NAME.NEXTVAL
INTO :new.id
FROM dual;
END;
I need to save data into 2 tables (an entity and an association table).
I simply save my entity with the save() method from my entity repository.
Then, for performance, I need to insert rows into an association table in native SQL. The rows reference the entity I saved before.
Here is the issue: I get an integrity-constraint exception concerning a foreign key. The entity saved first isn't known to this second query.
Here is my code :
The repo :
public interface DistributionRepository extends JpaRepository<Distribution, Long>, QueryDslPredicateExecutor<Distribution> {
#Modifying
#Query(value = "INSERT INTO DISTRIBUTION_PERIMETER(DISTRIBUTION_ID, SERVICE_ID) SELECT :distId, p.id FROM PERIMETER p "
+ "WHERE p.id in (:serviceIds) AND p.discriminator = 'SRV' ", nativeQuery = true)
void insertDistributionPerimeter(#Param(value = "distId") Long distributionId, #Param(value = "serviceIds") Set<Long> servicesIds);
}
The service :
#Service
public class DistributionServiceImpl implements IDistributionService {
#Inject
private DistributionRepository distributionRepository;
#Override
#Transactional
public DistributionResource distribute(final DistributionResource distribution) {
// 1. Entity creation and saving
Distribution created = new Distribution();
final Date distributionDate = new Date();
created.setStatus(EnumDistributionStatus.distributing);
created.setDistributionDate(distributionDate);
created.setDistributor(agentRepository.findOne(distribution.getDistributor().getMatricule()));
created.setDocument(documentRepository.findOne(distribution.getDocument().getTechId()));
created.setEntity(entityRepository.findOne(distribution.getEntity().getTechId()));
created = distributionRepository.save(created);
// 2. Association table
final Set<Long> serviceIds = new HashSet<Long>();
for (final ServiceResource sr : distribution.getServices()) {
serviceIds.add(sr.getTechId());
}
// EXCEPTION HERE
distributionRepository.insertDistributionPerimeter(created.getId(), serviceIds);
}
}
The two queries seem to run in different transactions even though I set the @Transactional annotation. I also tried to execute my second query with entityManager.createNativeQuery() and got the same result...
Invoke entityManager.flush() before you execute your native queries, or use saveAndFlush instead.
In your specific case I would recommend using
created = distributionRepository.saveAndFlush(created);
Important: your "native" queries must use the same transaction! (Otherwise you would need a different transaction isolation level.)
you also wrote:
I don't really understand why the flush action is not done by default
Flushing is handled by Hibernate (it can be configured; the default is "auto"). This means Hibernate will flush the data at some point in time, but always before you commit the transaction or execute another SQL statement via Hibernate. Normally this is no problem, but in your case you bypass Hibernate with your native query, so Hibernate does not know about this statement and therefore does not flush its pending data first.
See also this answer of mine: https://stackoverflow.com/a/17889017/280244 about this topic
I am trying to write a Criteria query in Hibernate. The desired behavior: if column empField1's value is not 'REGULARIZE', then update the record; otherwise do not update it.
I have tried the following:
Session session = factory1.openSession();
Transaction tx = session.beginTransaction(); // tx was never initialized in the original
Criteria criteria = session.createCriteria(EmployeePunch.class);
criteria.add(Restrictions.ne("empField1", "REGULARIZE"));
EmployeePunch empPunch = (EmployeePunch) criteria.uniqueResult();
empPunch.setId(empPuncId);
empPunch.setSigninTime(inTime);
empPunch.setSigninDate(dateOfUpdate);
empPunch.setSignoutTime(outTime);
empPunch.setPresent(presentStatus);
empPunch.setLastUpdateBy(empcode);
empPunch.setLastUpdateDate(time);
empPunch.setEmpField1(remark);
session.saveOrUpdate(empPunch);
tx.commit();
but it gives me this error:
Exception: query did not return a unique result: 527
I think you forgot to restrict by id; without it, Hibernate finds multiple records matching the empField1 restriction.
You should add the id as well, like below:
Criteria criteria=session.createCriteria(EmployeePunch.class);
criteria.add(Restrictions.ne("empField1","REGULARIZE"))
.add(Restrictions.eq("empPuncId",empPuncId));
Now it will return a single matching record and then update it.
That means that, with that criteria, there are multiple matching records in your database.
To find out how many records there are, try:
List<EmployeePunch> emps = (List<EmployeePunch>) criteria.list();
emps will give you a list of the EmployeePunch records that meet the criteria.
Then iterate the list and see how many items are in the database.
Why not use HQL in this way?
Query query = session.createQuery("update EmployeePunch set signinTime = :signinTime, signinDate = :signinDate where empField1 <> 'REGULARIZE'")
        .setParameter("signinTime", signinTime)
        .setParameter("signinDate", signinDate);
int updateRecordCount = query.executeUpdate();
Of course, you have to set values for the other properties (except for id if it is your @Id field); updateRecordCount gives you the count of updated records.
Please analyze the following two pieces of code and tell me why the first one fails with a primary key violation on commit while the second one doesn't.
Code which fails at commit:
try{
    Query q = em.createQuery("DELETE FROM Puntaje");
    q.executeUpdate();
    //em.getTransaction().commit();
    //em.getTransaction().begin();
    Iterator it = l.iterator();
    while(it.hasNext()){
        DataPuntaje dp = (DataPuntaje) it.next();
        Cliente c = new Cliente(dp.getCliente());
        Puntaje p = new Puntaje(dp.getPuntaje(), c);
        c.agregarPuntaje(p);
        em.merge(c);
    }
    System.out.println("test1");
    em.getTransaction().commit();
    System.out.println("test2");
}
Code which works fine:
try{
    Query q = em.createQuery("DELETE FROM Puntaje");
    q.executeUpdate();
    em.getTransaction().commit();
    em.getTransaction().begin();
    Iterator it = l.iterator();
    while(it.hasNext()){
        DataPuntaje dp = (DataPuntaje) it.next();
        Cliente c = new Cliente(dp.getCliente());
        Puntaje p = new Puntaje(dp.getPuntaje(), c);
        c.agregarPuntaje(p);
        em.merge(c);
    }
    System.out.println("test1");
    em.getTransaction().commit();
    System.out.println("test2");
}
The only difference is that the first one does not commit the delete query separately, but instead commits everything together at the end.
Cliente and Puntaje are in a 1:N bidirectional relation with cascade = ALL.
All the inserted Cliente instances have the same id, but merge should be smart enough to update instead of insert after the first one is persisted; that seems to fail in the first example and I can't find any explanation.
I'm using an embedded H2 database.
I would also add that the first code works FINE if there is an already-inserted Cliente row; it fails when the table is actually empty, so the delete is effectively doing nothing.
This is the error I'm getting:
Internal Exception: org.h2.jdbc.JdbcSQLException: Unique index or primary key violation: "PRIMARY_KEY_5 ON PUBLIC.CLIENTE(NICK)"; SQL statement:
INSERT INTO CLIENTE (NICK) VALUES (?) [23505-169]
Error Code: 23505
Call: INSERT INTO CLIENTE (NICK) VALUES (?)
bind => [cbaldes]
Query: InsertObjectQuery(Clases.Cliente@21cd5b08)
javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.0.2.v20100323-r6872): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.h2.jdbc.JdbcSQLException: Unique index or primary key violation: "PRIMARY_KEY_5 ON PUBLIC.CLIENTE(NICK)"; SQL statement:
INSERT INTO CLIENTE (NICK) VALUES (?) [23505-169]
Error Code: 23505
Call: INSERT INTO CLIENTE (NICK) VALUES (?)
bind => [cbaldes]
Query: InsertObjectQuery(Clases.Cliente@21cd5b08)
These are the tables:
@Entity
public class Puntaje implements Comparable, Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private int total;

    @ManyToOne(cascade = CascadeType.ALL, optional = false)
    @JoinColumn(name = "NICK")
    private Cliente cliente;

@Entity
public class Cliente implements Serializable {
    @Id
    private String nick;

    @OneToMany(cascade = CascadeType.ALL, mappedBy = "cliente")
    private List<Puntaje> puntajes;
When you perform operations on objects, the operations are recorded in the cache ONLY. JPA prepares internal lists of all objects to be inserted, updated, and deleted, and these are flushed together when flush or commit is called.
Now take your first example. You deleted all Puntaje, which adds all Puntaje to the deleted list. When you then call merge, it is indeed smart enough: it figures out that the object should be inserted rather than updated and adds it to the insert list. When you call commit, it tries to insert the objects from the insert list first and, as you can expect, this fails because the old objects have not yet been deleted.
The only difference in your second example is that, by committing first, you force the deletes to hit the database before the insertions, and hence it doesn't fail.
I am sure it would not fail even if you used flush in place of that intermediate commit.
Hope this helps you understand the reasoning behind the failure.