I am currently trying to use Liquibase inside a Java application. So far it works fine: I built myself a little tool which creates and updates an H2 database without any problems. The problem arises when I try to use formatted SQL changeSets.
I use an XML master changelog. In this changelog I include two changelogs: one XML changelog, which contains the database structure and creates all needed tables for me, and one SQL changelog containing all my insert statements with the data.
Updating works and the data is put into the database; the problem is that I can't roll back. So I looked it up and learned that I need to put a rollback statement into every changeset, which I did, but it still doesn't work.
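For context, my tool drives Liquibase roughly like this (a simplified sketch; the class name, changelog path and JDBC URL are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
import liquibase.Contexts;
import liquibase.LabelExpression;
import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public class ChangelogRunner {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:h2:./testdb", "sa", "");
        Database db = DatabaseFactory.getInstance()
                .findCorrectDatabaseImplementation(new JdbcConnection(conn));
        Liquibase liquibase = new Liquibase("masterChangelog.xml",
                new ClassLoaderResourceAccessor(), db);

        liquibase.update(new Contexts(), new LabelExpression()); // this part works
        liquibase.rollback(1, (String) null);                    // this is where the rollback fails
    }
}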
Example table: SOME_TABLE(ID_TABLE1, ID_TABLE2, someInput)
--changeset my.name:someId
INSERT INTO "PUBLIC"."SOME_TABLE" VALUES('someId', 'someOtherID', 'someInput');
--rollback DELETE FROM SOME_TABLE WHERE ID_TABLE1='someId'
This doesn't work.
--changeset my.name:someId
INSERT INTO "PUBLIC"."SOME_TABLE" VALUES('someId', 'someOtherID', 'someInput');
INSERT INTO "PUBLIC"."SOME_TABLE" VALUES('someId', 'differentId', 'someInput');
INSERT INTO "PUBLIC"."SOME_TABLE" VALUES('someId', 'IdFromHell', 'someInput');
--rollback DELETE FROM SOME_TABLE WHERE ID_TABLE1='someId'
Neither does this.
I have tried adding a semicolon after the rollback, leaving the semicolons off the inserts, making every insert its own changeset, and grouping them together as above, but nothing works. I always get the infamous: "No inverse to liquibase.change.core.RawSQLChange created".
I ran the statements directly against the H2 database and on their own they work fine. I don't see what is wrong; whether I use the console or my Java logic, the rollback fails. Does anybody know what's wrong?
I am thankful for every hint I can get.
I have a question. Where did these methods go?
Dialect.supportsTemporaryTables();
Dialect.generateTemporaryTableName();
Dialect.dropTemporaryTableAfterUse();
Dialect.getDropTemporaryTableString();
I've tried to browse the git history of Dialect.java, but no luck. I found that something like MultiTableBulkIdStrategy was created, but I couldn't find any example of how to use it.
To the point... I have legacy code (using Hibernate 4.3.11) which does batch deletes from multiple tables using a temporary table. Those tables may hold 1000 rows, but they may also hold 10 million rows. So, to make sure I don't kill the DB with some crazy delete, I create a temp table into which I put (using a select query with some condition) 1000 ids at a time, and then use this temp table to delete data from 4 tables. This runs in a while loop until all data matching the condition has been deleted. The transaction is committed after each cycle.
To make it more complicated, this code has to run on top of MySQL, MariaDB, Oracle, PostgreSQL, SQL Server and H2.
It was done using native SQL, with the methods mentioned above, but now I can't find a way to refactor it.
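Roughly, the legacy code looked like this (a simplified sketch using the Hibernate 4.3 Session and Dialect APIs; table, column and method names in the SQL are made up, and creating the temp table itself is omitted):
void purgeInBatches(SessionFactory sessionFactory) {
    Dialect dialect = ((SessionFactoryImplementor) sessionFactory).getDialect();
    String tempTable = dialect.supportsTemporaryTables()
            ? dialect.generateTemporaryTableName("MAIN_TABLE")
            : "HT_MAIN_TABLE";

    Session session = sessionFactory.openSession();
    int deleted;
    do {
        Transaction tx = session.beginTransaction();

        // 1. stage at most 1000 ids matching the condition
        //    (the "only 1000" syntax differs per database: LIMIT / ROWNUM / TOP)
        session.createSQLQuery("insert into " + tempTable + " (id) "
                + "select id from MAIN_TABLE where STATUS = 'OBSOLETE' limit 1000")
                .executeUpdate();

        // 2. delete from the dependent tables, then from the main table
        session.createSQLQuery("delete from CHILD_A where main_id in (select id from " + tempTable + ")").executeUpdate();
        session.createSQLQuery("delete from CHILD_B where main_id in (select id from " + tempTable + ")").executeUpdate();
        deleted = session.createSQLQuery("delete from MAIN_TABLE where id in (select id from " + tempTable + ")").executeUpdate();

        // 3. clear or drop the temp table, as the dialect prefers
        if (dialect.dropTemporaryTableAfterUse()) {
            session.createSQLQuery(dialect.getDropTemporaryTableString() + " " + tempTable).executeUpdate();
        } else {
            session.createSQLQuery("delete from " + tempTable).executeUpdate();
        }

        tx.commit(); // commit after each cycle
    } while (deleted > 0);
    session.close();
}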
My first try was to create a query using a nested select like this:
delete from TABLE where id in (select id from TABLE where CONDITION limit 1000)
but this is way slower, as I have to run the select query multiple times (once for each delete), and LIMIT is not supported in nested selects in HQL.
Any ideas or pointers?
Thanks.
The methods were present in version 4.3.11 but removed in version 5.0.0. It seems a bit unusual that they were removed rather than deprecated - the background is on this Jira ticket.
To quote from this:
Long term, I think the best approach is to remove the Dialect method
intended to support temp tables in a piecemeal fashion and to make
MultiTableBulkIdStrategy be a fully self-contained contract.
The methods were removed in this commit.
So it seems that getDefaultMultiTableBulkIdStrategy() is the intended replacement for these methods, but I'm not entirely clear on how, as it currently has no Javadoc. I guess you could try to work it out from the source code... or, if all else fails, perhaps try to contact Steve Ebersole, who implemented the change?
I am new to Hibernate. I can see that Hibernate throws StaleObjectStateException when multiple users try to persist the complete entity. However, most of my DB updates are done with HQL update queries. I have now added an extra condition, 'where version = :currentVersion', to those HQL update queries to verify that no other user has updated the particular record. It seems to work fine, but the problem is that I have a large number of queries, and I also have to keep the version number in my Java object in sync with the DB. Is there a simple way to get a StaleObjectStateException on an HQL update query during concurrent updates?
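What I am doing now looks roughly like this (a sketch; the entity, property and variable names are made up for illustration):
int updated = session.createQuery(
        "update Document d set d.title = :title, d.version = d.version + 1 "
      + "where d.id = :id and d.version = :currentVersion")
        .setParameter("title", newTitle)
        .setParameter("id", documentId)
        .setParameter("currentVersion", currentVersion)
        .executeUpdate();

if (updated == 0) {
    // nothing was updated, so someone else changed the row in the meantime
    throw new StaleObjectStateException("Document", documentId);
}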
You have understood this wrong. Hibernate throws StaleObjectStateException deliberately when multiple users try to persist the same entity. This prevents the last writer from winning and silently overwriting the data of their predecessors. Usually you catch this exception and show an error message to the user, something like "Someone has changed the data. Please retry!". Your HQL query clause is the wrong way to go and will force you to patch your code more and more.
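In other words, keep the versioned entity save and handle the conflict where it surfaces, roughly like this (a sketch; showError stands for however you report errors to the user):
try {
    session.beginTransaction();
    session.update(detachedEntity);        // the version column is checked when the change is flushed
    session.getTransaction().commit();
} catch (StaleObjectStateException e) {
    session.getTransaction().rollback();
    showError("Someone has changed the data. Please retry!"); // placeholder UI call
}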
I am currently using the entity class and entity manager that NetBeans creates automatically when a table is bound to a database, to get and set values in a Derby database.
However, when I want to update/edit an entry using:
LessonTb Obj = new LessonTb();
Obj.setAdditionalResources(Paths);
Obj.setDescription(LessonDescription);
Obj.setLessonName(LessonName);
Obj.setLessonPath(LessonName + ".txt");
Obj.setRecommendedTest(RecommendedTest);
EUCLIDES_DBPUEntityManager.getTransaction().begin();
EUCLIDES_DBPUEntityManager.getTransaction().commit();
lessonTbList.clear();
lessonTbList.addAll(lessonTbQuery.getResultList());
the current entry does not update in the database, even though I know this code worked in other projects. I use the same get and set methods from the same LessonTb class, which work for adding a new entry and deleting an entry.
What could possibly be wrong, and how do I solve my problem? No exceptions are thrown.
Here are several possibilities. Perhaps you can do more research to rule at least some of them out:
You're using an in-memory database, and you didn't realize that all the database contents are lost when your application terminates.
You're not in auto-commit mode, and your application failed to issue a commit statement after making your update.
You're not actually issuing the update statement that you think you're issuing; for some reason, your program flow is not reaching that code (see the sketch after this list).
Your update statement has encountered an error, but it's not the sort of error that results in an exception. Instead, there's an error code returned, but no exception is thrown.
There are multiple copies of the database, or multiple copies of the schema within the database, and you're updating one copy of the database but querying a different one.
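On the third point: for an update to reach the database, a managed entity has to be changed (or a detached one merged) inside the transaction. A minimal sketch, reusing the entity class and entity manager from the question (lessonId is a placeholder for the row's primary key):
EUCLIDES_DBPUEntityManager.getTransaction().begin();

// load the existing row as a managed entity, then change it
LessonTb lesson = EUCLIDES_DBPUEntityManager.find(LessonTb.class, lessonId);
lesson.setDescription(LessonDescription);
lesson.setAdditionalResources(Paths);
// ... other setters ...

// alternatively, for a detached object: EUCLIDES_DBPUEntityManager.merge(Obj);
EUCLIDES_DBPUEntityManager.getTransaction().commit();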
One powerful tool for diagnosing things more deeply is to learn how to use -Dderby.language.logStatementText=true and read in derby.log which SQL statements you are actually issuing, and what the results of those statements are. Here are a couple of links to help you get started: https://db.apache.org/derby/docs/10.4/tuning/rtunproper43517.html and http://apache-database.10148.n7.nabble.com/How-to-log-queries-in-Apache-Derby-td136818.html
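If you prefer not to pass a JVM flag, the same Derby property can also be set programmatically, as in this sketch:
// must run before the embedded Derby engine boots (i.e. before the first connection is opened)
System.setProperty("derby.language.logStatementText", "true");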
preparedStatement.executeUpdate()
Returns the number of rows updated. From my research so far, it does not seem possible to run an update query that also retrieves the updated rows, but this seems like such a basic feature that I'm clearly missing something. How do I accomplish this?
Per the first comment on the question, this is simply not possible in MySQL. PostgreSQL supports this feature as UPDATE ... RETURNING.
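With the PostgreSQL JDBC driver that looks roughly like this (a sketch; the table, columns and variables are made up):
// PostgreSQL only: the updated rows come back as a normal result set
try (PreparedStatement ps = connection.prepareStatement(
        "UPDATE account SET balance = balance + ? WHERE status = ? RETURNING id, balance")) {
    ps.setBigDecimal(1, amount);
    ps.setString(2, "ACTIVE");
    try (ResultSet rs = ps.executeQuery()) {   // executeQuery, not executeUpdate
        while (rs.next()) {
            System.out.println(rs.getLong("id") + " -> " + rs.getBigDecimal("balance"));
        }
    }
}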
If you use executeQuery instead of executeUpdate, you get a resultset back.
Then, change your stored procedure to be a function, and return the changed rows in a select at the end of the function. AFAIK, you cannot return data from a procedure in MySQL (as opposed to e.g. Microsoft SQL server).
EDIT: The suggestion struck out above is not possible. The JDBC specification does not allow updates in query statements (see the answer for this one: http://bugs.mysql.com/bug.php?id=692).
BUT, if you know the WHERE clause of the rows you are about to update, you can always select them first to get the primary keys, perform the update, and then select those keys again afterwards. That way you get the changed rows.
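Something like this (a sketch; the table, column and variable names are made up):
// 1. remember the keys of the rows the WHERE clause matches
List<Long> ids = new ArrayList<>();
try (PreparedStatement sel = connection.prepareStatement(
        "SELECT id FROM orders WHERE status = 'PENDING'");
     ResultSet rs = sel.executeQuery()) {
    while (rs.next()) {
        ids.add(rs.getLong("id"));
    }
}

// 2. perform the update against exactly those keys
try (PreparedStatement upd = connection.prepareStatement(
        "UPDATE orders SET status = 'SHIPPED' WHERE id = ?")) {
    for (Long id : ids) {
        upd.setLong(1, id);
        upd.addBatch();
    }
    upd.executeBatch();
}

// 3. re-select the same keys afterwards to get the rows as they look after the update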
When you fire preparedStatement.executeUpdate(), you already have the row identifiers with which you can uniquely identify the rows you want updated; you need to use the same identifiers to run a query and fetch the updated rows. You cannot accomplish the update and the retrieval in one shot using the JDBC APIs.
There is a UNIQUE database constraint on an index which doesn't allow more than one record having identical columns.
There is a piece of code, managed by Hibernate (v2.1.8), that performs two DAO calls of the form
getHibernateTemplate().save( theObject )
which result in two records being entered into the table mentioned above.
If this code is executed without transactions, it results in an INSERT, an UPDATE, then another INSERT and another UPDATE, and it works fine. Apparently the sequence is to insert the record containing a DB NULL first, and then update it with the proper data.
If this code is executed under Spring (v2.0.5), wrapped in a single Spring transaction, it results in two INSERTs, immediately followed by an exception due to the UNIQUE constraint mentioned above.
This problem only manifests itself on MS SQL due to its incompatibility with ANSI SQL. It works fine on MySQL and Oracle. Unfortunately, our solution is cross-platform and must support all databases.
Having this stack of technologies, what would be your preferred workaround for given problem?
You could try flushing the hibernate session in between the two saves. This may force Hibernate to perform the first update before the second insert.
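Something along these lines (a sketch; the two objects are placeholders for whatever your DAO saves):
getHibernateTemplate().save(firstObject);
getHibernateTemplate().flush();   // pushes the pending INSERT/UPDATE to the database now,
                                  // still inside the surrounding Spring transaction
getHibernateTemplate().save(secondObject);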
Also, when you say that hibernate is inserting NULL with the insert, do you mean every column is NULL, or just the ID column?
I have no experience with Hibernate, so I don't know whether you are free to change the DB at will or whether Hibernate requires a specific DB structure you cannot change.
If you can make changes, then you can use this workaround in MSSQL to emulate the ANSI behaviour:
drop the unique index/constraint
define a calc field like this:
alter table MyTable add MyCalcField as
    case when MyUniqueField is null
         then cast(MyPrimaryKey as MyUniqueFieldType)
         else MyUniqueField
    end
add the unique constraint on this new field you created.
Naturally this applies if MyUniqueField is not the primary key! :)
You can find more details in this article at databasejournal.com