I am trying to understand "changing the database without changing code". I am currently working with microservices using Spring Boot, Java, Thymeleaf and Cloud Foundry.
I have a Spring Boot application and attached a database as a service using Cloud Foundry.
My problem: I keep reading that one purpose of microservices is to make it easy to change services without changing code.
Here is where I got stuck:
In Java I have a SQL query: "select * from ORDER where Status = 'ACCEPTED';"
My database is attached as a service on Cloud Foundry using CUPS:
"jdbc:oracle:thin:username/password//host:port/servicename"
So let's say I want to change this database to one with a CUSTOMER table (take it as a different database). This will throw an error, because the new database has no ORDER table for "select * from ORDER where Status = 'ACCEPTED';" to run against.
I've changed the database, but wouldn't I still have to go back to my code and change the SQL query?
My attempt to resolve this issue:
Instead of hard-coding the SQL query in Java ("select * from ORDER where Status = 'ACCEPTED';"),
I created a system environment variable sqlScript with the value select * from ORDER where Status = 'ACCEPTED'.
Then in Java I read the environment variable: String sqlScript = System.getenv("sqlScript");
So now, instead of going back into the Java code to change the SQL, a user can change it through environment variables.
This is a very dirty way to work around my issue. What would be a better alternative?
I know my understanding is wrong somewhere. Please guide me to the right path.
I think the phrase 'changing the database without changing code' doesn't mean that if you add/remove fields in the DB you don't have to modify your codebase; that just wouldn't make any sense.
What it really means is that you should use good database abstractions, so that if you need to change your database vendor from, let's say, MySQL to OracleDB, your Java code stays the same. The only thing that may differ is some configuration.
A good example of this is an ORM like Hibernate. You write your Java code once, no matter which SQL database you are using underneath. To switch databases, the only thing you need to change is the dialect configuration property (in reality it's not that easy, but probably much easier than if we were coupled to one specific DB).
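For illustration, a minimal sketch with Hibernate's classic Configuration API (dialect class names as in Hibernate 5; connection settings omitted):

    import org.hibernate.cfg.Configuration;

    class DialectSwitch {
        // the same mapping and query code runs on either database;
        // only this one property changes
        static Configuration configure(boolean oracle) {
            return new Configuration().setProperty("hibernate.dialect",
                    oracle ? "org.hibernate.dialect.Oracle12cDialect"
                           : "org.hibernate.dialect.MySQL5Dialect");
        }
    }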
Hibernate gives you a good abstraction over SQL databases. Nowadays there is a new trend: having an abstraction over different DB families like SQL and NoSQL. So in an ideal world, your codebase should stay unchanged even if you want to swap MySQL for MongoDB or even Neo4j. Spring Data is probably the most popular framework that tries to solve this problem. Another framework that I found recently is Kundera, but I haven't used it so far.
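As a sketch of what that looks like with Spring Data JPA (Order is a hypothetical entity with a status field):

    import java.util.List;
    import org.springframework.data.repository.CrudRepository;

    public interface OrderRepository extends CrudRepository<Order, Long> {
        // Spring Data derives the query for the underlying store
        // from the method name; no SQL string in the code
        List<Order> findByStatus(String status);
    }

The same repository idea also works with Spring Data MongoDB or Neo4j, which is exactly the abstraction over DB families mentioned above.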
So, answering your question: you do not need to keep your SQL queries in environment variables. All you need to do is use proper abstractions in your language of choice.
In my opinion, it would be better to use something like Flyway or Liquibase, both of which are integrated really well in Spring Boot. You can find more information in the Spring Boot documentation.
I prefer Liquibase, since it uses a higher-level format to describe your database migrations, allowing you to switch databases quite easily. This way, you can also use a different database per environment, for example:
HSQLDB during local development
MySQL in DEV and TEST
Oracle in Production
It's also possible to export the schema of an existing database to get an initial version in Flyway or Liquibase; this will give you a good baseline for your scripts.
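As a sketch, such a baseline could even be kept in Liquibase's plain SQL changelog format (author, id and table here are made up):

    --liquibase formatted sql

    --changeset jdoe:1
    CREATE TABLE orders (
        id INT PRIMARY KEY,
        status VARCHAR(20) NOT NULL
    );
    --rollback DROP TABLE orders;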
As the application gets complicated, one thing that changes a lot is the queries, especially the complex ones. Wouldn't it be easier to maintain the queries in the DB rather than in resource files inside the package, so that they can be enhanced easily without a code change? What are the drawbacks of this?
You can use stored procedures to keep your queries in the database. Then your Java code can just call the procedure instead of building a complex query.
See Wikipedia for a more detailed explanation of stored procedures:
https://en.wikipedia.org/wiki/Stored_procedure
You can find details about the implementation and usage in the documentation of your database system (MySQL, MariaDB, Oracle...).
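A minimal JDBC sketch of calling such a procedure (the procedure name and column are hypothetical):

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    class ProcedureCall {
        static void printAcceptedOrders(String url, String user, String pass) throws SQLException {
            try (Connection con = DriverManager.getConnection(url, user, pass);
                 // the query logic lives in the database; Java only calls it
                 CallableStatement cs = con.prepareCall("{call get_accepted_orders()}");
                 ResultSet rs = cs.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("id"));
                }
            }
        }
    }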
When you decide to move logic into the database, you should use a version control system for databases, like Liquibase: https://www.liquibase.org/get-started/quickstart
You can write the changes to your database code in XML, JSON or even YAML and check them in to your version control system (SVN, Git...). This way you have a history of the changes and can roll back to a previous version of your procedure if something goes wrong.
You also asked why some people use stored procedures while others keep their queries in the code.
Stored procedures can encapsulate the query and provide an interface to the data. They can be faster than queries. That is good.
But there are also problems:
you distribute the business logic of your application between the database and the program code. It can really be troublesome if the logic is spread through all technical layers of your application.
it is not so simple anymore to switch from an Oracle database to MariaDB if you use specific features of the database system. You have to migrate or rewrite the procedures.
you have to integrate Liquibase or another such system into your build pipeline to keep track of your database changes.
So it depends on the project and its size which of the solutions is better.
We have a web-based application in the dev phase, where we use Spring 5, JPA (Hibernate) and PostgreSQL 9.4.
Until now we were using one instance of the PostgreSQL DB for our work. Basically, we don't have any schema generation script; we simply updated the DB whenever we needed some new table, column etc. For Hibernate we were generating classes from the DB.
Now that we have some amount of test data, each change in the DB brings a lot of trouble and confusion. We realized that we need to create and start maintaining a schema generation file, along with some scripts which generate test data.
After some research, we see two options:
Create two *.sql files. The first will contain the schema generation script, the second the SQL to create test data. Then add a small module with a class which executes the *.sql files using plain JDBC (a sketch of what we have in mind follows after the options). Basically, we will continue developing, and whenever we make some changes we quickly wipe->create->populate the db. This approach looks the most appealing to us at this point. It's quick, simple and robust.
The second option is to set up a tool which may help with that, e.g. Liquibase.
This approach also looks good in terms of versioning support and other capabilities. However, we are not in production yet; we are in an active development phase. We don't have many devs making db changes, and we are not sure how frequently we will update the db schema in production; it could be rare.
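A sketch of the runner from the first option (file names are hypothetical; the naive split on ';' assumes the scripts contain no procedural blocks or semicolons inside string literals):

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    class DbReset {
        static void wipeCreatePopulate(String url, String user, String pass) throws Exception {
            try (Connection con = DriverManager.getConnection(url, user, pass);
                 Statement st = con.createStatement()) {
                // schema first, then test data
                for (String file : new String[] {"schema.sql", "test-data.sql"}) {
                    for (String sql : new String(Files.readAllBytes(Paths.get(file))).split(";")) {
                        if (!sql.trim().isEmpty()) {
                            st.execute(sql);
                        }
                    }
                }
            }
        }
    }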
The question is the following: would the first approach be bad practice, and would applying the second one give us the most benefits, making it worth using?
Would appreciate any comments or any other suggestions!
The first approach is NOT a bad practice as of today. But it may become one, considering the growth of tools like Liquibase.
If you are in the early or middle part of the development phase, go ahead with Liquibase, along with Spring Data. Conversely, in the closing stages of the development phase, think about whether you really need it.
I would suggest the second approach, as it will automatically find new scripts as you add them and execute them on startup. Moreover, when tools like Liquibase and Flyway are available, why reinvent the wheel?
The second approach also avoids the unnecessary code for manually executing the *.sql files. That code needs testing too, and can be error-prone when updated.
Moreover, with the first approach, where you write manual code to execute scripts, you also have to check which scripts need to be executed: if you already have an existing database and you add some new scripts, only those new scripts should run. These things are taken care of automatically with the second approach, and you don't need to worry about an already executed script being executed again.
Hope this answers your concern. Happy coding!
I have a JDBC application that uses Apache Derby. How can I migrate my entire database system to use MySQL?
I have 3 Java programs that access the database
I have 3 tables and 2 views
I am using NetBeans. I have never used MySQL before and do not know where to begin. Is there nice integration between NetBeans and MySQL, and how can I set it up?
All help is greatly appreciated!
Looks like this plugin would probably help you:
http://netbeans.org/kb/docs/ide/mysql.html
I found this tutorial on the Spring site, but I think it is only a partial solution.
In it, they rely on Hibernate to drop and create the tables, and I really don't like that. You have to go through special coding to add static data. For example, if your app is tracking devices, you probably want a table of device_types. At least some of those device types will be in the DB, as well as devices, users, etc.
What I intend to do is use Derby until I am somewhat stable. From it, I will get the database schema and create it in MySQL. It seems that Derby's dblook utility can be used for that.
As added security, I intend to run my web app with a DB user that does not have the ability to add or drop tables. It is also possible to remove the permission to delete rows if you use the concept of making rows "inactive": instead of deleting a no-longer-used device type, you set the "active" flag to F. Your device type query would then look like:
select * from device_type where active = 'T'
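On the Java side, with modern JDBC 4 drivers on the classpath, the switch itself is mostly a matter of swapping the connection URL; a sketch (host and credentials are made up):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    class ConnectionSwitch {
        static Connection open(boolean useMySql) throws SQLException {
            return useMySql
                    // requires mysql-connector-java on the classpath
                    ? DriverManager.getConnection("jdbc:mysql://localhost:3306/myDb", "user", "password")
                    // embedded Derby; creates the database if it does not exist
                    : DriverManager.getConnection("jdbc:derby:myDb;create=true");
        }
    }

The schema, views and any vendor-specific SQL still have to be migrated separately, e.g. via the dblook output mentioned above.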
I'm currently working my way into JPA 2.0 and I'm starting to like how easy it is to maintain persistent data.
What I'm currently trying to accomplish is using JPA in a basic desktop application. The application should allow me to open embedded databases which are on my file system. I chose H2 for now, but I could live with switching to JavaDB or anything else.
What I'm trying to accomplish is that one can open a database file without previously defining a persistence unit in the persistence.xml file.
I can easily define a unit and persist objects, but it needs to be configured first.
I want to write some sort of database browser which allows opening without preconfiguration and recompiling.
http://www.objectdb.com/java/jpa/start/connection
I saw that ObjectDB allows this type of PersistenceFactory creation, but I was not able to transfer the example to other databases.
Am I totally wrong in the way I approach this problem? Is JPA not designed for on-the-fly database access?
Thank you for your help,
Johannes
This is not part of the JPA standard. Some implementations may offer their own API for it. For example, with DataNucleus, if you go to this page http://www.datanucleus.org/products/accessplatform_3_0/jpa/persistence_unit.html, at the end you can see how to create dynamic persistence-units (and hence EMFs), and that implementation allows persistence to the widest range of datastores you'll get anywhere.
You can pass a Map of properties to the createEntityManagerFactory() call that defines the database connection info, etc. The property names are the same as in the persistence.xml. I assume most JPA providers support this; EclipseLink does.
You will still need to define the set of classes for the database and map them.
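A sketch using the standard JPA 2.0 property names ("browser-pu" is a hypothetical unit that still has to be declared in persistence.xml; its connection settings are overridden at runtime):

    import java.util.HashMap;
    import java.util.Map;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    class DynamicConnect {
        static EntityManagerFactory open(String h2FilePath) {
            Map<String, String> props = new HashMap<>();
            props.put("javax.persistence.jdbc.driver", "org.h2.Driver");
            props.put("javax.persistence.jdbc.url", "jdbc:h2:" + h2FilePath);
            props.put("javax.persistence.jdbc.user", "sa");
            props.put("javax.persistence.jdbc.password", "");
            // the Map overrides whatever persistence.xml contains
            return Persistence.createEntityManagerFactory("browser-pu", props);
        }
    }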
If you do not have any classes either, then you could look into EclipseLink's dynamic support:
http://wiki.eclipse.org/EclipseLink/Examples/JPA/Dynamic
If you want to make a database browser accessing different databases, you can't use a PU/EntityManager (IMO).
You'll need a dialog asking the user for the IP/port of the database, the username/password, the database name to access, and the type of database.
Then all you need to do is create a socket, send requests over the socket, and parse the response into a view.
Since both the request and the response are database specific, the user has to select the proper database driver.
Since I'm not really proficient with databases, some details may be irrelevant, but I'll include everything:
As part of a project at my university, we're creating a website that uses JSP and servlets, with a MySQL server as the backend.
I'm in charge of setting up the tables in the DB and creating the Java classes to interact with it. However, we can only connect to the MySQL server from inside the university, while we all (7 people) work mostly from home.
I'm creating an interface QueryHandler which has a method that takes a string (representing a query) and returns a ResultSet. My question is this: how do I create a class implementing this interface which simulates a database, so that others can use different DBHandlers without knowing the difference, and I can test different queries without connecting to the actual MySQL database?
EDIT: I'm not so sure about the differences between SQL databases, but obviously all the queries I run on MySQL should also run on the mock.
Why not just install your own MySQL database for testing? It runs on Windows, Mac and Linux, and it's not too resource heavy. I have it installed on my laptop for local testing.
Your API appears to be flawed. You should not be returning ResultSets to clients. By doing so, you are forever forcing your clients to rely on a relational database backend. Your data access layer needs to hide all of the details of how your data is actually structured and stored.
Instead of returning a ResultSet, consider returning a List or allowing the client to supply a Stream that your data access component can write to.
This will make unit tests trivial for the clients of the API and will allow you to swap storage mechanisms at will.
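A sketch of what such an interface could look like (Order and the method set are hypothetical):

    import java.util.List;

    public interface OrderDao {
        // callers see domain objects, not JDBC types; the implementation
        // decides whether a relational database sits behind it at all
        List<Order> findByStatus(String status);

        void save(Order order);
    }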
Try Derby. It's a free server you can use to test against, if you don't mind having to change drivers when you go back to SQL Server. You might be limited in the kinds of queries you can run, though. I'm not sure if SQL Server has any special syntax outside of standard SQL.
How about using HSQLDB for offline tests? It won't behave exactly like a MySQL DB, but it is a fast in-memory SQL DB that should satisfy most of your needs.
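A sketch of opening such a database (requires the hsqldb jar on the classpath; SA with an empty password is HSQLDB's default account):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    class TestDb {
        static Connection open() throws SQLException {
            // lives in memory only and disappears when the JVM exits
            return DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "SA", "");
        }
    }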
The best way in my experience is multiple database instances and/or schemas. Normally you'd have one for each user to develop against and sanity-check the running application, one for an automated build running the unit tests, and ideally one for each user to run their unit tests against. And of course instances/schemas for demos and integration testing. Apart from the practical side, being able to do this ensures that deploying/upgrading the app/database will be pretty near faultless too.
Assuming you have a DAO layer, the only code that needs access to a real database at the unit-test level is the DAO implementation; the business layer should use a mock DAO implementation.
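As a sketch, assuming a simple DAO interface like the OrderDao sketched in an earlier answer, the mock is just a plain in-memory class that the business-layer tests can use:

    import java.util.ArrayList;
    import java.util.List;

    class InMemoryOrderDao implements OrderDao {
        private final List<Order> orders = new ArrayList<>();

        @Override
        public List<Order> findByStatus(String status) {
            List<Order> result = new ArrayList<>();
            for (Order o : orders) {
                // getStatus() is assumed on the hypothetical Order class
                if (o.getStatus().equals(status)) {
                    result.add(o);
                }
            }
            return result;
        }

        @Override
        public void save(Order order) {
            orders.add(order);
        }
    }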