I have two tables, let's call them A and B. Table A has a foreign key to table B. My system first creates a row in table B, and on another screen the user can create a row in table A that is related to the previously created row in table B.
These two rows need to be sent to a specific SymmetricDS client; to do this I'm using a subselect router for each table. The problem is that the row created in table B only knows where it needs to go once the row in table A is created. By then, SymmetricDS has already evaluated the subselect router of table B and considered the batch unrouted. Since the row of table B was not routed, the client can't create the row in table A due to a foreign key error.
Is there a way I can guarantee that the two rows will synchronize together?
Yes, there is: use a trigger customization. You'll have to wait until version 3.7 is released, or take the latest version of the source and apply the patch http://www.symmetricds.org/issues/view.php?id=1570, then declare a before-trigger customization on table A that updates the referenced row in table B, so that the table B row is routed to the target before the row in table A.
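For illustration only, here is a sketch of registering such a customization. It assumes the patch adds a custom_before_insert_text column to sym_trigger (check the column name against your build), and the b_id foreign key and routing_hint column are likewise assumptions. The embedded SQL becomes part of table A's capture trigger, so the touched table B row is re-captured and routed before the new table A row:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class RegisterBeforeTriggerText {
        public static void main(String[] args) throws Exception {
            try (Connection c = DriverManager.getConnection(
                    "jdbc:mysql://localhost/corp", "symmetric", "secret");
                 Statement st = c.createStatement()) {
                // The inner UPDATE runs inside table A's insert trigger,
                // so "new" refers to the row just inserted into table A.
                st.executeUpdate(
                    "UPDATE sym_trigger "
                  + "SET custom_before_insert_text = "
                  + "'update table_b set routing_hint = new.client_id "
                  + "where id = new.b_id' "
                  + "WHERE trigger_id = 'table_a'");
            }
        }
    }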
I'm new to the H2 database. I'm inserting some data into Table A. Whenever I add some data to Table A, how can I ensure that Table B also gets updated?
Table A:

Prim_Key1  INDEX  NAME
1          1      A
2          2      B
3          3      C

Table B:

Prim_Key2  INDEX  Value
Prim_Key2 --> populated from the XYZ table; INDEX --> populated from the INDEX column of Table A.
The thing is, I'm populating the INDEX column of Table A with a Java trigger. So whenever there is a new value in Table A, Table B is not getting updated accordingly.
Is there any solution to this?
Try this:
FOREIGN KEY(INDEX) REFERENCES TABLEA(INDEX) ON UPDATE CASCADE
This should update the foreign keys as soon as the primary keys in the parent table are updated.
If you want to update some other data (apart from the foreign key) as well, then you should set up another trigger.
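In H2 that second trigger can itself be written in Java. A minimal sketch, assuming the table and column names above (the INSERT into TABLEB and the row layout are assumptions; Prim_Key2 is populated elsewhere, so adjust to your schema):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import org.h2.api.Trigger;

    // Fired after each insert on TABLEA; copies the new INDEX value into TABLEB.
    public class CopyIndexTrigger implements Trigger {
        @Override
        public void init(Connection conn, String schemaName, String triggerName,
                         String tableName, boolean before, int type) {
            // nothing to initialize for this sketch
        }

        @Override
        public void fire(Connection conn, Object[] oldRow, Object[] newRow)
                throws SQLException {
            // assumed row layout of TABLEA: [PRIM_KEY1, INDEX, NAME]
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO TABLEB(\"INDEX\") VALUES (?)")) {
                ps.setObject(1, newRow[1]);
                ps.executeUpdate();
            }
        }

        @Override
        public void close() { }

        @Override
        public void remove() { }
    }

Register it in H2 with: CREATE TRIGGER TRG_COPY_INDEX AFTER INSERT ON TABLEA FOR EACH ROW CALL "CopyIndexTrigger"; (use the fully qualified class name if the class is in a package).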
I have 2 master tables 'A' and 'B', and one main table 'C' which contains foreign keys to A and B. I have done all the annotation mapping in the POJO classes, used CascadeType, and mapped many-to-one relationships from the C POJO class to the A and B POJO classes. When I try to insert into table C, rows get inserted into the A and B master tables as well. That's fine. But if the value already exists in a master table, I just need to insert its foreign key into table C; duplicate entries should not happen in the master tables.
1) If I set unique key constraints, Hibernate stops at the master table insertion itself.
2) I don't want to check whether the value already exists in the master table, fetch the primary key if it does, and set it on the main table. I am trying to achieve this through the annotations alone, without those condition checks.
This is just an example; I actually have 4 foreign key relationships in my table, so I am trying to avoid those checks.
Can anyone help me with this?
I hope the question is clear. If any other information is needed, kindly let me know.
Thanks in advance.
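For reference, a minimal sketch of the mapping described above (class, table, and column names are hypothetical):

    import javax.persistence.CascadeType;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.JoinColumn;
    import javax.persistence.ManyToOne;

    @Entity
    public class C {
        @Id
        @GeneratedValue
        private Long id;

        // CascadeType.ALL persists A and B along with C; saving a new C
        // that holds transient A/B instances with already-existing values
        // is what re-inserts them and creates the duplicates.
        @ManyToOne(cascade = CascadeType.ALL)
        @JoinColumn(name = "a_id")
        private A a;

        @ManyToOne(cascade = CascadeType.ALL)
        @JoinColumn(name = "b_id")
        private B b;
    }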
I have a question about Hibernate constraints, about an issue I've never run into before.
Imagine that I have a table (Snapshot) where I can add snapshot rows, each of which has to be related to exactly one row of another table. This relationship is not with only one table, though; multiple tables can join to this Snapshot table. What I want to prevent is that, once a Snapshot row is already linked to a row of one table, let's say:
A.row1->Snapshot.row1
another table can pick up the same row for its own relationship:
B.row1->Snapshot.row1.
Otherwise, imagine the problem when I try to do a cascading delete on A.
Any idea how to make this work with Hibernate unique constraints?
In Snapshot, make the field that links to the other table (I suppose it's called row1) unique:
@Column(unique = true)
Edit:
You cannot control how many other tables use your primary key. What you can do is introduce a new table where you manage the linking: give it two columns, one called link_from and the other link_to, and make link_to unique.
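A rough JPA sketch of that link table (entity and column names are assumptions):

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Table;
    import javax.persistence.UniqueConstraint;

    // link_to is unique, so a Snapshot row can be claimed by at most one
    // referring row, no matter which table that row lives in.
    @Entity
    @Table(name = "snapshot_link",
           uniqueConstraints = @UniqueConstraint(columnNames = "link_to"))
    public class SnapshotLink {
        @Id
        @GeneratedValue
        private Long id;

        @Column(name = "link_from", nullable = false)
        private Long linkFrom;

        @Column(name = "link_to", nullable = false)
        private Long linkTo;
    }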
I have two tables A and B. My application continuously executes transactions that consist of:
1) Insert rows in table B.
2) Update a row in table A.
(The two steps belong to the same transaction, to keep tables A and B mutually consistent.)
At any time t, I need a way to get a snapshot of the tables. More specifically, at any time t, I need the value of a particular row in table A, and I need the rows inserted in table B during the transaction that last updated that row of table A.
For example, at time t0, my tables have the following content:
Table A => (rowA1)
Table B => (rowB11, rowB12)
The rows rowB11 and rowB12 have been inserted inside the transaction that updated the row in table A to the state rowA1.
At time t1, the transaction is executed again, and my tables now have the following content:
Table A => (rowA2)
Table B => (rowB11, rowB12, rowB21, rowB22)
The rows rowB21 and rowB22 have been inserted inside the transaction that moved the row in table A from state rowA1 to state rowA2.
Now, at any time t, I would like to select the row in table A (i.e. now it's rowA2) and also select the rows that were inserted to reach state rowA2 (i.e. rowB21 and rowB22). What I don't want is to select the row in table A (i.e. rowA2) and get rows rowB31 and rowB32 from table B, since the state I got from table A doesn't match those inserted rows (which have just been inserted by a still-running transaction).
I hope my question is clear enough.
I should mention that I'm using MySQL and that I manage my transactions with Spring.
Thanks,
Mickael
EDIT:
Finally, simply using transactions with an isolation level of at least READ_COMMITTED is not enough. If, between the two SELECTs (the one that gets the current state of a row in table A and the one that gets the rows associated with this state in table B), one or more other transactions are executed (i.e. one or more executions of steps 1-2), the rows fetched from table B will not correspond to the state of the row previously fetched from table A.
Add a column in B that allows you to match rows in B with a specific state of A:
Time t0:
Table A => (rowA1)
Table B => (rowB11, rowA1), (rowB12, rowA1)
Time t1:
Table A => (rowA2)
Table B => (rowB11, null), (rowB12, null), (rowB21, rowA2), (rowB22, rowA2)
At t1, the rows in B you want are something like SELECT * FROM B WHERE ref_to_A = [current_value_in_A].
It appears that your question was about transaction isolation after all. So here we go:
Anything that happens during a transaction (unless the isolation level is READ_UNCOMMITTED), i.e. between BEGIN and COMMIT (or ROLLBACK), is invisible to concurrent transactions.
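Putting the two ideas together, here is a sketch with Spring's JdbcTemplate (bean wiring omitted; the state and ref_to_A columns are the assumptions from the example above). Both SELECTs run inside one transaction, so under InnoDB's REPEATABLE_READ they read from a single consistent snapshot:

    import java.util.List;
    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.transaction.annotation.Isolation;
    import org.springframework.transaction.annotation.Transactional;

    public class SnapshotDao {
        private final JdbcTemplate jdbc;

        public SnapshotDao(JdbcTemplate jdbc) {
            this.jdbc = jdbc;
        }

        // Must be called through a Spring proxy for @Transactional to apply.
        @Transactional(isolation = Isolation.REPEATABLE_READ, readOnly = true)
        public List<Long> rowsForCurrentState(long aId) {
            // 1) Read the current state of the row in A ...
            String state = jdbc.queryForObject(
                    "SELECT state FROM A WHERE id = ?", String.class, aId);
            // 2) ... then fetch exactly the B rows tagged with that state.
            //    A concurrent run of steps 1-2 cannot make the two reads
            //    disagree, because both see the same snapshot.
            return jdbc.queryForList(
                    "SELECT id FROM B WHERE ref_to_A = ?", Long.class, state);
        }
    }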
I'm currently working on my first Java application based on a MySQL DB. I'm using EclipseLink 2.0 and NetBeans, and at the moment I am facing a behaviour I cannot explain; maybe someone has stumbled over this problem in the past and can help me with it. Here goes:
Table 1 has the PK of table 2 as an FK. On the application side, there is a UI where users can generate content for table 1. The value for the FK (Table2ID) is chosen with a dropdown menu, which gets its values by reading the collection of table 2 rows. Now, when I try to change the value of the FK to another (already existing) value, instead of doing just that, a new row with a fresh ID is generated in table 2, with all other column values cloned from the row I tried to point the FK at. So, for example, when I try to set table 1 rows 3, 4 and 5 to table1.fkcolumn = 6 (i.e. Table2ID = 6), the program instead clones the row with ID = 6 three times and points each of the table 1 rows at one of the clones.
Any help would be greatly appreciated .
The problem is that you are changing the primary key of an entity. In EclipseLink, when you change the PK of an entity, you have a new entity. As such, EclipseLink inserts the new rows and leaves the old rows alone.
To get around this you have three choices:
1) Change the database. Primary keys really shouldn't be changed.
2) Have the application execute an update query that changes the primary key values, then requery them.
3) Delete the old rows and re-create with a new primary key.
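As an aside, if what you actually want is to repoint table 1's foreign key at a different, already existing table 2 row, the usual JPA pattern is to assign the other managed entity rather than editing fields of the one currently referenced, so no primary key ever changes. A rough sketch (Table1, Table2, and the accessor names are the hypothetical mapped entities from the question):

    import javax.persistence.EntityManager;

    public class RepointFk {
        static void repoint(EntityManager em, long table1Id, long newTable2Id) {
            em.getTransaction().begin();
            Table1 row = em.find(Table1.class, table1Id);
            // Fetch the already-existing target row; no clone is created.
            Table2 target = em.find(Table2.class, newTable2Id);
            // Only table 1's FK column changes on commit.
            row.setTable2(target);
            em.getTransaction().commit();
        }
    }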