I'm new to H2. I'm populating (inserting) some data into Table A. Whenever I add data to Table A, how can I ensure that Table B also gets updated?
Table A
Prim_Key1 INDEX NAME
1 1 A
2 2 B
3 3 C
Table B
Prim_Key2 INDEX Value
Prim_Key2 --> populated from table XYZ
INDEX --> populated from the INDEX of Table A
The thing is, I'm populating the INDEX column of Table A with a Java trigger, so whenever a new value arrives in Table A, Table B is not getting updated accordingly.
Is there any solution to this?
Try this:
ALTER TABLE TABLEB ADD FOREIGN KEY("INDEX") REFERENCES TABLEA("INDEX") ON UPDATE CASCADE;
This should update the foreign keys as soon as the primary keys in the parent table are updated. (INDEX is quoted because it is a keyword in H2, and the referenced TABLEA column must be unique for the constraint to be allowed.)
If you want to update some other data (apart from the foreign key) as well, then you should set up another trigger.
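For the extra trigger, H2 lets you write it as a Java class implementing org.h2.api.Trigger. Here is a minimal sketch, assuming PRIM_KEY2 and Value of Table B are filled from table XYZ by some other mechanism (class, package and trigger names are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.h2.api.Trigger;

// Register with:
//   CREATE TRIGGER TRG_TABLEA_INS AFTER INSERT ON TABLEA
//   FOR EACH ROW CALL "com.example.SyncTableBTrigger";
public class SyncTableBTrigger implements Trigger {

    @Override
    public void init(Connection conn, String schemaName, String triggerName,
                     String tableName, boolean before, int type) {
        // No setup needed for this sketch.
    }

    @Override
    public void fire(Connection conn, Object[] oldRow, Object[] newRow) throws SQLException {
        // newRow follows Table A's column order: [PRIM_KEY1, INDEX, NAME].
        // INDEX is quoted because it is a keyword in H2. How PRIM_KEY2 and
        // Value are looked up from table XYZ is left out of this sketch.
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO TABLEB(\"INDEX\") VALUES(?)")) {
            ps.setObject(1, newRow[1]);
            ps.executeUpdate();
        }
    }

    @Override
    public void close() {
        // Nothing to release.
    }

    @Override
    public void remove() {
        // Nothing to clean up.
    }
}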
I have a table facility_header and I want to alter it to add a few columns. I want the newly added columns to hold NULL as their default value. The table is already loaded with 14 years of data; as it is partitioned by year for 2002-2014, the newly added columns should by default come back as NULL for the existing partitions.
create table facility_HEADER
(
A string,
B INT,
C INT
)
partitioned by (year int comment 'Date Year Incurred')
STORED AS PARQUET
Alter Table Command
ALTER TABLE facility_HEADER add columns (MSCLMID Bigint,NPI STRING,UNITS decimal(10,2));
When I run DESCRIBE on the table, I can see the columns got appended at the end.
When I run SELECT * FROM any of the partitions, it gives this error:
Failed with exception
java.io.IOException:org.apache.hadoop.hive.ql.metadata.HiveException:
java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot
be cast to org.apache.hadoop.io.LongWritable
My table holds 14 years of data, and I don't want to fix this by putting NULL into the SELECT clause and giving aliases.
I tried the things referenced here and here.
Can anyone help me understand what actually happened to my table? I lost 14 years of data in one go.
Make a full backup of the files first. Then try to alter the table and drop the newly added columns; if you haven't written into the table since the change, it should work. Check that the table is selectable again. Then create a new table with the new columns and INSERT OVERWRITE into it.
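Since Hive has no direct DROP COLUMN, "drop the newly added columns" in practice means ALTER TABLE ... REPLACE COLUMNS with the original column list. A hypothetical sketch over Hive JDBC (the HiveServer2 URL and database are assumptions, and the hive-jdbc driver must be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RevertFacilityHeader {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive2://localhost:10000/default");
             Statement st = conn.createStatement()) {
            // REPLACE COLUMNS rewrites only the metadata back to the original
            // three columns; the data files themselves are untouched.
            st.execute("ALTER TABLE facility_header REPLACE COLUMNS (a STRING, b INT, c INT)");
        }
    }
}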
ALTER TABLE on Hive only changes the metadata, not the underlying data, which is why SELECT * FROM the table fails.
Hive tries to extract the data from the files stored under "/apps/hive/warehouse/databasename.db/tablename/" using the table's row format and file format, and it cannot find the column values described by the new row format.
Note: the data is STORED AS PARQUET, and Hive does not get a definition of the new columns from the existing Parquet files.
Workaround: create a new table with the full column list, insert the old data into it, and rename it to the old table name (the new table name below is illustrative):
INSERT INTO TABLE facility_header_new PARTITION (year)
SELECT a, b, c, CAST(NULL AS BIGINT) AS MSCLMID, CAST(NULL AS STRING) AS NPI,
CAST(NULL AS DECIMAL(10,2)) AS UNITS, year FROM facility_HEADER;
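For completeness, a hypothetical end-to-end sketch of that workaround over Hive JDBC (endpoint, database and the facility_header_new name are assumptions):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RebuildFacilityHeader {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive2://localhost:10000/default");
             Statement st = conn.createStatement()) {
            // 1) New table with the full column list, same storage format.
            st.execute("CREATE TABLE facility_header_new (a STRING, b INT, c INT, "
                    + "msclmid BIGINT, npi STRING, units DECIMAL(10,2)) "
                    + "PARTITIONED BY (year INT) STORED AS PARQUET");
            // 2) Copy the old data partition by partition, filling the new
            //    columns with NULL (dynamic partitioning must be enabled).
            st.execute("SET hive.exec.dynamic.partition=true");
            st.execute("SET hive.exec.dynamic.partition.mode=nonstrict");
            st.execute("INSERT INTO TABLE facility_header_new PARTITION (year) "
                    + "SELECT a, b, c, CAST(NULL AS BIGINT), CAST(NULL AS STRING), "
                    + "CAST(NULL AS DECIMAL(10,2)), year FROM facility_header");
            // 3) Swap the tables.
            st.execute("ALTER TABLE facility_header RENAME TO facility_header_old");
            st.execute("ALTER TABLE facility_header_new RENAME TO facility_header");
        }
    }
}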
I am updating 2500 records' String primary key using the direct SQL below (run from Java):
update childtable set column = 'new value' where column = 'old value'
I also have a parent table with the same string column and the child table has a foreign key to the parent table (childtable.column -> parenttable.column).
What I see is that running the above SQL statement on 2500 rows consistently takes around 6 seconds and 300 milliseconds. I have also tested and verified that the operation takes this long in some test scenarios using an external database tool (Squirrel).
If I do any one of the following, the runtime of the above SQL consistently drops to around 400 milliseconds:
- remove the foreign key from the child to the parent
- add an index on the child table covering the child table's single-column primary key and the child-table column used in the foreign key, i.e. a composite index on 2 columns (see the sketch below)
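For reference, a minimal sketch of that composite index created over JDBC against H2 (database URL, table and column names are illustrative; ID stands for the child's primary key and PARENT_COL for the foreign-key column):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AddCompositeIndex {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./mydb");
             Statement st = conn.createStatement()) {
            // One index spanning the child's PK and the FK column.
            st.execute("CREATE INDEX IDX_CHILD_PK_FK ON CHILDTABLE(ID, PARENT_COL)");
        }
    }
}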
I am satisfied with the fix of adding a composite index, but it no longer works in a more complicated example where the child table has a composite primary key that includes the column referencing the parent table, and there is still a foreign key on that column.
I would like to know how I can improve the performance of this scenario. I expect that this update should not be this slow; going from 400 ms to 6300 ms is a huge performance loss for something that should, in my mind, be simple.
I have two master tables, A and B, and one main table C which contains foreign keys to A and B. I have done all the annotation mapping in the POJO classes, used CascadeType, and mapped many-to-one relationships from C to the A and B POJO classes. When I insert into table C, rows also get inserted into the A and B master tables. That's fine. But if the value already exists in a master table, I just need to insert the foreign key into table C; duplicate entries must not happen in the master tables.
1) If I set unique key constraints, Hibernate fails on the master table insertion itself.
2) I don't want to check whether the value already exists in the master table, fetch its primary key if it does, and set that on the main table. I am trying to achieve this with the annotations alone, without those condition checks.
This is just an example; I have 4 foreign key relationships in my table, so I am trying to avoid those checks.
Can anyone help me with this?
I hope the question is clear. If any other information is needed, kindly let me know.
Thanks in advance.
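For reference, a minimal sketch of the mapping described above (entity, field and column names are assumptions):

import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;

@Entity
class A {
    @Id @GeneratedValue
    Long id;
    String value;
}

@Entity
class B {
    @Id @GeneratedValue
    Long id;
    String value;
}

@Entity
public class C {
    @Id @GeneratedValue
    private Long id;

    // CascadeType.PERSIST is what makes Hibernate insert a new A/B row
    // together with C; to reuse an existing row, C has to reference an
    // already-managed A/B instance instead of a new one.
    @ManyToOne(cascade = CascadeType.PERSIST)
    @JoinColumn(name = "a_id")
    private A a;

    @ManyToOne(cascade = CascadeType.PERSIST)
    @JoinColumn(name = "b_id")
    private B b;
}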
I have two tables, let's call them A and B. Table A has a foreign key to table B. My system first creates a row in table B, and in another screen the user can create a row in table A that is related to the created row in table B.
These two rows need to be sent to a specific SymmetricDS client; to do this I'm using a subselect router for each table. The problem is: the row created in table B only knows where it needs to go once the row in table A is created. By the time that happens, SymmetricDS has already evaluated the subselect router of table B and considered the batch as unrouted. Since the row of table B was not routed, the client can't create the row in table A due to a foreign key error.
Is there a way I can guarantee that the two rows will synchronize together?
Yes, there is: use trigger customization. You'll have to wait until version 3.7 is released, or take the latest version of the source and apply the patch http://www.symmetricds.org/issues/view.php?id=1570. Then declare a before-trigger customization for table A that updates the row in table B that the foreign key points to, so that row gets routed to the target before the row in table A.
I'm currently working on my first Java application based on a MySQL DB. I'm using EclipseLink 2.0 and NetBeans, and at the moment I am facing a behaviour I cannot explain; maybe someone has stumbled over this problem in the past and can help me with it. Here goes:
Table 1 has the PK of table 2 as an FK. On the application side, there is a UI where users can generate content for table 1. The value for the FK (Table2ID) is chosen with a dropdown menu, which gets its values by reading the collection of table 2 rows. Now, when I try to change the FK to another (already existing) value, instead of doing just that, a new row with a fresh ID is generated in table 2, with all other column values cloned from the row I tried to point the FK at. So, for example, when I try to set table 1 rows 3, 4 and 5 to table1.fkcolumn = 6 (i.e. Table2ID = 6), the program instead clones the row with ID = 6 three times and points each of the table 1 rows at one of the clones.
Any help would be greatly appreciated .
The problem is that you are changing the primary key of an entity. In EclipseLink, when you change the PK of an entity, you have a new entity; as such, EclipseLink inserts the new rows and leaves the old rows alone.
To get around this you have three choices:
1) Change the database. Primary keys really shouldn't be changed.
2) Set the application to execute an update query which changes the primary key values and requery them.
3) Delete the old rows and re-create with a new primary key.
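A hypothetical sketch of option 2 as a native update plus a clear of the persistence context so the changed keys are re-read (table and column names are assumptions; it relies on ON UPDATE CASCADE, or a matching update of table 1's FK column, so the references follow along):

import javax.persistence.EntityManager;

public class ChangeTable2Pk {
    static void changePk(EntityManager em, long oldId, long newId) {
        em.getTransaction().begin();
        em.createNativeQuery("UPDATE TABLE2 SET ID = ? WHERE ID = ?")
          .setParameter(1, newId)
          .setParameter(2, oldId)
          .executeUpdate();
        em.getTransaction().commit();
        // Drop stale managed instances so subsequent queries re-read the rows.
        em.clear();
    }
}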