I am using Spring 4.1.6, and I have my service working fine with Hibernate. In the root of the project I've got my schema.sql, which is run every time I start the server. The problem is that the first time I ran the server I inserted some data into the database, and when I restarted it the script was executed again, so I lost all the data I had loaded before the restart.
So I think I have two options to solve this problem:
Edit the SQL script so that every statement only runs if its object doesn't already exist (which would be laborious, since I'd have to edit the script every time I export my database).
Tell Hibernate, in some way, to execute the SQL script only in certain cases. It would be great if there were some configuration that runs the script only when the database doesn't exist yet (see the sketch below).
Do you know if this is even possible? Thanks in advance.
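For illustration, option 2 can be approximated in plain Spring by guarding the script with a JDBC metadata check. A minimal sketch, assuming Java-based configuration; MY_FIRST_TABLE stands in for any table the script creates:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import javax.sql.DataSource;

    import org.springframework.core.io.ClassPathResource;
    import org.springframework.jdbc.datasource.init.DatabasePopulatorUtils;
    import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;

    public class GuardedSchemaInitializer {

        // Runs schema.sql only when a table created by the script is missing.
        public void initialize(DataSource dataSource) throws Exception {
            try (Connection conn = dataSource.getConnection();
                 ResultSet tables = conn.getMetaData()
                         .getTables(null, null, "MY_FIRST_TABLE", new String[] {"TABLE"})) {
                if (!tables.next()) { // table absent -> fresh database
                    ResourceDatabasePopulator populator = new ResourceDatabasePopulator();
                    populator.addScript(new ClassPathResource("schema.sql"));
                    DatabasePopulatorUtils.execute(populator, dataSource);
                }
            }
        }
    }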
It sounds like this is the perfect use-case for a tool called Liquibase. This is basically a version control tool for your database which allows you to define changes to your schema and/or data and ensures that these changes are only applied once.
It's incredibly useful if multiple people are changing the same database schema and ensures that your database is always valid for the version of the code that you have checked out/released etc.
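With Spring 4 you can hook Liquibase in as a plain bean. A minimal sketch, assuming the liquibase-core dependency is on the classpath; the changelog path is hypothetical:

    import javax.sql.DataSource;

    import liquibase.integration.spring.SpringLiquibase;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class LiquibaseConfig {

        // Applies any changesets not yet recorded in Liquibase's tracking table;
        // changesets that already ran are skipped, so existing data survives restarts.
        @Bean
        public SpringLiquibase liquibase(DataSource dataSource) {
            SpringLiquibase liquibase = new SpringLiquibase();
            liquibase.setDataSource(dataSource);
            liquibase.setChangeLog("classpath:db/changelog-master.xml"); // hypothetical path
            return liquibase;
        }
    }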
I have a very strange problem. I have a Spring Boot application with a JpaRepository and Hibernate to handle persisting simple Java objects. These only have a number of string properties: no relationships, no special subclasses, very, very simple. I am using Spring Boot 2.3.7.RELEASE. I tested this effect both with an H2 database and with PostgreSQL. I have done this hundreds of times before.
On my development notebook running Windows 10, everything works as expected: I can save entities and delete them. But when I deploy the application on my Linux server (CentOS), the delete and insert statements are simply not executed.
For example, if I call the save function, I would expect first a select statement (to check whether the entry is already present), followed by an insert (as it is not present in my case). And that's exactly what I see on my notebook. On the server I see the initial select statement executed, but the following insert is not. It is not even attempted.
The same holds true for deleteAll() - I see the select statement, but the following delete statements are missing. And I do not get any error message. Nothing. Hibernate is just omitting the delete and insert statements.
I can insert data into the database manually with the same user used by my application, and the application even picks that data up: I can retrieve and display it. So the connection to the database works and the mapping works; it's as if Hibernate were in some kind of read-only mode, omitting all delete and insert statements.
Any idea about that? I already checked all available configuration (it is identical on both machines, apart from some environment variables, which only provide the passwords).
I am simply running out of ideas where to look. Thanks a lot in advance!
I finally found the solution: another datasource was being autoconfigured by a part of the application (Spring Batch), and it was a race between the two as to which one got initialized first. On my laptop the PostgreSQL JPA datasource won, resulting in a working system; on the server the other one won, resulting in the problems.
The only difference I found in the debug log was a longer time for the JPA initialization...
After manually wiring Spring Batch to the PostgreSQL datasource, it works.
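For anyone hitting the same symptom, the wiring can be as small as this sketch, assuming the Spring Batch 4 that ships with Boot 2.3 (if more than one DataSource bean exists, qualify the injected one explicitly):

    import javax.sql.DataSource;

    import org.springframework.batch.core.configuration.annotation.DefaultBatchConfigurer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class BatchDataSourceConfig {

        // Pin Spring Batch to the application's PostgreSQL DataSource
        // instead of letting it race against another autoconfigured one.
        @Bean
        public DefaultBatchConfigurer batchConfigurer(DataSource postgresDataSource) {
            return new DefaultBatchConfigurer(postgresDataSource);
        }
    }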
We have a web-based application in the development phase, where we use Spring 5, JPA (Hibernate), and PostgreSQL 9.4.
Until now we have been using one instance of the PostgreSQL DB for our work. Basically, we don't have any schema generation script; we have simply been updating the DB whenever we needed a new table, column, etc. For Hibernate we generate the classes from the DB.
Now that we have some amount of test data, each change in the DB brings a lot of trouble and confusion. We realized that we need to create and start maintaining a schema generation file, along with some scripts that generate test data.
After some research, we see two options:
Create two *.sql files: the first containing the schema generation script, the second the SQL that creates test data. Then add a small module with a class which executes the *.sql files using plain JDBC. Basically, we continue developing, and whenever we make changes we quickly wipe, recreate, and repopulate the DB. This approach looks the most appealing to us at this point: it is quick, simple, and robust (a possible runner is sketched below).
The second is to set up a tool which helps with this, e.g. Liquibase.
This approach also looks good in terms of versioning support and other capabilities. However, we are not in production yet; we are in an active development phase. We don't have many devs making DB changes, and we are not sure how frequently we will update the DB schema in production; it could be rare.
The question is the following: would the first approach be bad practice, and would applying the second one give enough benefits to be worth using?
Would appreciate any comments or any other suggestions!
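For concreteness, the runner in the first option can be a few lines if Spring's JDBC support is already on the classpath. A minimal sketch, where the script names are just examples:

    import java.sql.Connection;
    import javax.sql.DataSource;

    import org.springframework.core.io.ClassPathResource;
    import org.springframework.jdbc.datasource.init.ScriptUtils;

    public class DevDatabaseReset {

        // Wipe/create/populate: unconditionally runs both scripts in order,
        // so the schema script should start by dropping existing objects.
        public void reset(DataSource dataSource) throws Exception {
            try (Connection conn = dataSource.getConnection()) {
                ScriptUtils.executeSqlScript(conn, new ClassPathResource("schema.sql"));
                ScriptUtils.executeSqlScript(conn, new ClassPathResource("test-data.sql"));
            }
        }
    }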
The first approach is NOT a bad practice in itself, but it is becoming one, considering the growth of tools like Liquibase.
If you are in the early or middle stages of the development phase, go ahead with Liquibase, along with Spring Data. Conversely, in the closing stages of the development phase, think about whether you really need it.
I would suggest the second approach, as it will automatically find new scripts as you add them and execute them on startup. Moreover, when you have tools available like Liquibase and Flyway, why reinvent the wheel?
The second approach also avoids the unnecessary code for manually executing the *.sql files. That code needs testing too, and is error-prone whenever it is updated.
Moreover, with the first approach your manual code also has to check which scripts need to be executed: if you already have an existing database and you add some new scripts, only those new scripts should run. These things are taken care of automatically with the second approach, so you don't need to worry about an already-executed script being executed again (see the sketch below).
Hope this answers your concern. Happy coding!
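To make the "already-executed scripts are skipped" point concrete, here is a minimal Flyway sketch, assuming Flyway 5.2+ and placeholder connection details. Versioned files such as V1__schema.sql and V2__test_data.sql under classpath:db/migration are each applied exactly once, recorded in Flyway's history table:

    import org.flywaydb.core.Flyway;

    public class MigrateOnStartup {

        public static void main(String[] args) {
            // Scans classpath:db/migration, compares against the history table,
            // and applies only the migrations that have not run yet.
            Flyway flyway = Flyway.configure()
                    .dataSource("jdbc:postgresql://localhost:5432/mydb", "user", "password")
                    .load();
            flyway.migrate();
        }
    }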
I am developing a very large-scale J2EE application, and we chose to use Derby as an embedded database for JUnit testing, since hitting the actual prod database would slow down our tests. When I bootstrap my application, the Derby DB creates all the tables, so I can run JDBC queries against it. It works fine, but the drawback is that I cannot actually query any of the tables except through JDBC calls at runtime. So if I need to change my queries, I have to stop the app, modify my query statements, then restart the application and run in debug. This process makes it very difficult to analyze complex queries. Does anyone know of some kind of Derby plugin that can help me query the DB without going through my Java code?
If you are using Maven for your build, you can use the derby-maven-plugin, which I wrote and which is available on GitHub and via Maven Central. It takes care of starting and stopping the database for you around your tests. You will need to populate the database yourself, of course. You will also have the database in your target/derby folder after the tests execute, so you can always query the data yourself afterwards. This helps you work in a separate development environment which doesn't affect the production database.
You can check here for my answer to a similar question.
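Since the database files stay on disk under target/derby, you can also open them between runs from a scratch class or Derby's ij tool. A minimal JDBC sketch, where the database and table names are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class InspectDerbyDb {

        public static void main(String[] args) throws Exception {
            // Open the on-disk database left behind by the tests and inspect a table.
            try (Connection conn = DriverManager.getConnection("jdbc:derby:target/derby/testdb");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT * FROM MY_TABLE")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }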
We have a system written in Java using a PostgreSQL database.
This database changes often, and we update it about once a week or so. The changes are to the structure of the DB (DDL), usually functions and fields added for new functionality.
For the changes in the DB we usually use Navicat, as follows:
1- We make the change to the structure of the DB using Navicat, and we copy the SQL it gives us into an XML file, one for each change we make.
2- When we have to update the DB in production, we collect the files, identified by a version number, and update the DB.
3- Then we repeat this for each DB installed (30 in total).
The problem we are having is that the whole process is manual, and it is very easy to forget to copy a change into the XML; then the script does not work, or even worse, the system fails when it needs that change.
Therefore we are looking for a way to automate this task, and we came up with the following idea:
1- We make the changes in Navicat.
2- We configure Postgres to log DDL changes to a CSV file (e.g. logging_collector = on, log_destination = 'csvlog', and log_statement = 'ddl' in postgresql.conf).
3- Later we read the CSV file and transfer the changes to the XML used to update the production DB.
The problem we are having is that the log will record all attempts to change the structure, including ones that errored out, so a script built from it would fail as well.
Is there some way to log only successful DDL changes in Postgres?
Is there a script or application that can extract the DDL changes and turn them into a script automatically?
Is there a better way to automate this process?
There are many answers to the questions above :-) I have managed rapidly changing databases using a number of schemes. One way to do it is to maintain a master database (like you have). Use dbtoyaml to create a YAML description of the master database, then use yamltodb against each of the (30) targets, which will do everything necessary to make each target database look exactly like the master (roughly: dbtoyaml masterdb > schema.yaml on the master, then yamltodb targetdb schema.yaml on each target to generate the migration SQL, with -u to apply it directly). I have used this software for about 6 months; it is fantastic: Pyrseas. -g
I am currently responsible for migrating data for our application, as part of upgrading to a new version. I am trying to migrate from HSQL to HSQL; later we will move on to other combinations.
I have a standalone utility to do this. I am using MockServletContext to initialize my services (this migration is to be done without starting the servers).
The problem is that all the tables are migrated except for 2-3, the exact number depending on the size of the data migrated. Extensive debugging turned up nothing wrong; in fact, all the data is migrated when stepping through in Eclipse, but on a normal run the last 3 tables fail to complete.
Any clue where to look?
In a normal run I have added loggers to see whether we are reading all the data from the source database, and the logs prove that we do.
The only place where I am unable to add logs is where a method in the driver is called.
In the last step we call the PreparedStatement object's executeBatch()/executeUpdate() methods (tried both, with exactly the same result).
I am completely clueless about what to do and where to look. Any suggestions?
Thanks
In a normal run I have added loggers to see whether we are reading all the data from the source database, and the logs prove that we do. The only place where I am unable to add logs is where a method in the driver is called.
If you suspect something wrong there, try wrapping your driver in log4jdbc. It will show you the SQL actually issued to the DB. Good luck!
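A minimal sketch of the wrapping, assuming the classic log4jdbc artifact; the HSQLDB URL and credentials are placeholders. Only the driver class and URL prefix change, and every statement sent through the connection then shows up under the jdbc.sqlonly logger:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class Log4jdbcWrapping {

        public static void main(String[] args) throws Exception {
            // Register the spy driver, then prefix the real JDBC URL with "log4jdbc:".
            Class.forName("net.sf.log4jdbc.DriverSpy");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:log4jdbc:hsqldb:file:target/migration-db", "SA", "")) {
                // ... run the same migration batch as before and watch the logs
            }
        }
    }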