If I have a setup where I need to run some SQL across several different database environments from a single Java program, is there a way to configure a connection pool to achieve this?
When I say several different database environments, I mean several versions of the same environment (staging, development, unit test, pre-prod, etc.). I want to create something that can run the same SQL query against the script logging table in each environment, to verify that each environment has had the same scripts run against it. We've had issues where the environments got out of sync and bad things happened. While we are improving the process to try to reduce this, we also need a tool so we can check what has actually been run.
Currently we have a pool property that passes in the URL pointing to each environment's connection.properties file. This is fine for the current connection pooling, but I'm not sure whether it will work for multiple databases.
If you need to connect to many different databases at the same time, you should use a separate connection pool for each database. It doesn't make sense to use the same pool for different databases, because a connection to one database can never be reused for another database.
If you need to connect to a different database depending on external configuration (such as a command-line argument or properties file), then you should arrange for the appropriate connection string to be used depending on the environment setting.
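For example, here is a minimal sketch of the per-environment-pool approach, assuming HikariCP as the pool implementation; the property-file names and the script_log table are placeholders for whatever your environments actually use:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.LinkedHashMap;
import java.util.Map;

public class ScriptLogChecker {

    public static void main(String[] args) throws Exception {
        // One pool per environment; each properties file holds that
        // environment's jdbcUrl, username and password.
        Map<String, HikariDataSource> pools = new LinkedHashMap<>();
        pools.put("dev", new HikariDataSource(new HikariConfig("dev.properties")));
        pools.put("staging", new HikariDataSource(new HikariConfig("staging.properties")));
        pools.put("preprod", new HikariDataSource(new HikariConfig("preprod.properties")));

        // Run the same query against each environment's script logging table.
        String sql = "SELECT script_name, applied_on FROM script_log ORDER BY script_name";
        for (Map.Entry<String, HikariDataSource> entry : pools.entrySet()) {
            try (Connection conn = entry.getValue().getConnection();
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                System.out.println("== " + entry.getKey() + " ==");
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "  " + rs.getTimestamp(2));
                }
            }
        }
        pools.values().forEach(HikariDataSource::close);
    }
}
```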
What is your application server? The best solution is to "hardcode" a symbolic connection pool name into the application. Then you can use the same signed version of the .jar/.war/.ear file in every environment.
To map that symbolic name to the real database, you can either use JNDI mapping at the application-server level, or use tnsnames.ora/sqlnet.ora (default domain) mapping, which is the usual way to manage this in the Oracle world.
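As a rough illustration of the JNDI approach (the resource name jdbc/AppDB is just a placeholder for whatever symbolic name you hardcode):

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;

public class EnvironmentDataSource {
    // The application only knows the symbolic name; each environment's
    // application server maps "jdbc/AppDB" to the real database.
    public Connection openConnection() throws NamingException, SQLException {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/AppDB");
        return ds.getConnection();
    }
}
```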
Related
I couldn't find an answer to this question. How can I export a Java project that makes use of a PostgreSQL database?
I want to use the same database on another computer. Do I need to export the database itself with the project? How can this be done?
What should the connection URL be, so that the database is accessible on another computer?
I'm using JDBC, and I'm on Windows.
Thanks in advance.
EDIT: Wouldn't I also need to dynamically retrieve the username and password on the other computer, instead of using the specific username and password I have on my computer in PostgreSQL?
It really depends on what you want to achieve.
Shared database between hosts
Do you want the application on both computers to use the same database, so that changes made by one are seen on the other? If so, you need to configure each copy of the application to connect to the same database instance on one of the machines. This is usually done by changing the JDBC URL. You'll need to configure PostgreSQL on the machine that'll be the database server so it allows connections from the other hosts, ensure they can talk to each other over TCP/IP, etc.
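For instance, a sketch of connecting to a PostgreSQL instance on another machine, with the host and credentials read from a hypothetical db.properties file rather than hardcoded (which also covers the username/password concern in the edit). It assumes the PostgreSQL JDBC driver is on the classpath:

```java
import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class RemotePostgresConnection {
    // db.properties (host, database, user, password) is edited per machine
    // instead of hardcoding credentials in the code.
    public static Connection connect() throws Exception {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("db.properties")) {
            props.load(in);
        }
        String url = "jdbc:postgresql://" + props.getProperty("host")
                + ":5432/" + props.getProperty("database");
        return DriverManager.getConnection(url,
                props.getProperty("user"), props.getProperty("password"));
    }
}
```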
Fresh DB on each host
Do you want each install to have a separate instance of the database, so changes made on one have no effect on the other, and where each instance starts out with a blank, empty database (or one with only static contents like lookup tables)? If so, you should generally define the database using SQL scripts, then have the application run the SQL scripts when first installed on a machine. If you've defined the database by hand so far, you can use pg_dump to create a SQL script that you can use as the basis for this, but I really advise you to look into tools like Liquibase for schema management instead.
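A very rough sketch of running such a setup script on first install is below; splitting on ';' only works for simple DDL scripts, which is one more reason to prefer a schema-management tool like Liquibase:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.Statement;

public class SchemaInstaller {
    // Naive script runner: reads the SQL file and executes each statement.
    // Splitting on ';' breaks on functions/triggers, so treat this as
    // illustration only.
    public static void runScript(Connection conn, String path) throws Exception {
        String script = new String(Files.readAllBytes(Paths.get(path)));
        try (Statement st = conn.createStatement()) {
            for (String sql : script.split(";")) {
                if (!sql.trim().isEmpty()) {
                    st.execute(sql);
                }
            }
        }
    }
}
```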
"Fork" current state
Do you want each instance of the application on a machine to have an independent database, so changes made on one have no effect on other instances on other machines, but where the starting state of an install is what was in the database on the other host? If so, you need to dump and reload the database alongside the application, using pg_dump -Fc and pg_restore. You can automate this within your application / build system using tools like ProcessBuilder, or do it manually.
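A sketch of driving pg_dump from Java with ProcessBuilder; the host, user, and output path are placeholders, pg_dump must be on the PATH, and the password has to come from PGPASSWORD or ~/.pgpass:

```java
import java.io.File;

public class DatabaseDumper {
    public static void dump(String dbName, File outFile) throws Exception {
        // pg_dump -Fc produces a custom-format archive suitable for pg_restore.
        Process p = new ProcessBuilder(
                "pg_dump", "-Fc",
                "-h", "localhost",
                "-U", "appuser",
                "-f", outFile.getAbsolutePath(),
                dbName)
                .inheritIO()
                .start();
        if (p.waitFor() != 0) {
            throw new RuntimeException("pg_dump failed");
        }
    }
}
```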
There's no generic, canned way to do this. It'll require you to define an application deployment procedure, maybe produce an installer, etc.
I need to dynamically update property values in different applications residing on different hosts (one host could have multiple applications deployed). Different applications may have different properties to update.
I am thinking about having each application open a thread that acts as a local server, and then using a centralized application to push out updates over a socket connection. But is this too complicated? If the connection is lost, recovery would be a problem.
Since this kind of problem is not uncommon, I wonder if anyone knows of an existing framework or tool that accomplishes it in an easier and cleaner way. Of course, it has to work with Tomcat.
I have a Java project with a MySQL database.
I am using Advanced Installer to create a setup file.
I can embed a JRE to run the software (without installing Java on the system).
Likewise, I want to embed the MySQL database (the target system doesn't have MySQL installed).
Is there any software to embed a MySQL database in my project setup?
MySQL is very difficult to embed correctly and there are a number of failure states that might occur if it is not shut down using the proper procedure. SQLite is a much better engine for this sort of thing and is used by a number of applications as a persistent backing store. While not as powerful as MySQL, it is much more resilient. It also has the advantage of not requiring a separate process.
SQLite's storage method is to persist things into a file that can be copied, moved, or backed-up without any issues. MySQL involves many such files, some of which are in an inconsistent state unless the correct FLUSH is called.
The best you can do with MySQL is bundle it, not embed it, but then you'll be responsible for setting it up on the host system, configuring it correctly, running the appropriate maintenance procedures, and providing some kind of back-up facility for the database itself.
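By contrast, here is a minimal sketch of using SQLite from Java, assuming the sqlite-jdbc (Xerial) driver is bundled on the classpath; the file and table names are just examples:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SqliteExample {
    public static void main(String[] args) throws Exception {
        // The whole database is the single file app-data.db, created on first
        // use; no separate server process or installation step is needed.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:app-data.db");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS settings (name TEXT PRIMARY KEY, value TEXT)");
            st.execute("INSERT OR REPLACE INTO settings VALUES ('version', '1.0')");
        }
    }
}
```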
Does the term 'embedded database' carry different meaning from 'database'?
There are two definitions of embedded databases I've seen:
Embedded database as in a database system designed specifically for the "embedded" space (mobile devices and so on). This means they perform reasonably in tight environments (memory/CPU-wise).
Embedded database as in databases that do not need a server, and are embedded in an application (like SQLite.) This means everything is managed by the application.
I've personally never seen the term used exactly as Wikipedia defines it, but that's probably my fault, although it resembles quite a bit my number 2 above.
The word 'embedded' does add meaning, basically that the database is dedicated to a specific application rather than shared among multiple applications, to a degree hidden from the user of the application, and completely controlled by the application.
An embedded database is conceptually just a part of the application rather than a separate thing.
Take, for example, an embedded H2 database. You don't need a server running on your machine; your whole database is stored in one local file (originally two files). It is opened and locked when you connect to your DB, and unlocked when you disconnect.
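For example, a minimal sketch, assuming the H2 jar is on the classpath; the file path and credentials are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class H2EmbeddedExample {
    public static void main(String[] args) throws Exception {
        // Embedded mode: the database lives in local files under ./data and is
        // locked while this connection is open; no server process is required.
        try (Connection conn =
                 DriverManager.getConnection("jdbc:h2:./data/mydb", "sa", "")) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}
```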
When a developer embeds a database library inside an application and there is no need for an administrator, it is called an embedded database. The database is hidden, but data management via SQL (e.g. ITTIA DB SQL) or NoSQL (e.g. Berkeley DB) is accessible through APIs. Embedded databases are common in web development and device applications.
Since I'm not really proficient with databases, some details may be irrelevant, but I'll include everything:
As part of a project in my University, we're creating a website that uses JSP, servlets and uses a MySQL server as backend.
I'm in charge of setting up the tables on the DB, and creating the Java classes to interact with it. However, we can only connect to the MySQL server from inside the University, while we all (7 people) work mostly at home.
I'm creating an interface QueryHandler which has a method that takes a string (representing a query) and returns a ResultSet. My question is this: how do I create a class implementing this interface that simulates a database, so that others can use different DBHandlers without noticing the difference, and so that I can test different queries without connecting to the actual MySQL database?
EDIT: I'm not so sure about the differences between SQL databases, but obviously all the queries I run on MySQL should run on the mock.
Why not just install your own MySQL database for testing? It runs on Windows, Mac and Linux, and it's not too resource heavy. I have it installed on my laptop for local testing.
Your API appears to be flawed. You should not be returning ResultSets to clients. By doing so, you are forever forcing your clients to rely on a relational database backend. Your data access layer needs to hide all of the details of how your data is actually structured and stored.
Instead of returning a ResultSet, consider returning a List or allowing the client to supply a Stream that your data access component can write to.
This will make unit tests trivial for the clients of the API and will allow you to swap storage mechanisms at will.
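For example, the data access layer could expose something like this instead (User and UserDao are hypothetical names, not from the original question):

```java
import java.util.List;

// Plain domain object: callers never see JDBC types.
class User {
    private final int id;
    private final String name;

    User(int id, String name) {
        this.id = id;
        this.name = name;
    }

    int getId() { return id; }
    String getName() { return name; }
}

// The storage behind this interface can be MySQL, an in-memory fake,
// or anything else, without clients noticing.
interface UserDao {
    List<User> findAll();
}
```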
Try Derby. It's a free database you can use to test against, if you don't mind having to change drivers when you go back to MySQL. You might be limited in the kinds of queries you can run, though. I'm not sure whether MySQL has any special syntax outside of standard SQL.
How about using HSQLDB for offline tests? It won't behave exactly like a MySQL DB, but it is a fast in-memory SQL DB that should satisfy most of your needs.
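A minimal sketch of what that looks like, assuming the hsqldb jar is on the classpath (the table is just an example):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InMemoryDbTest {
    public static void main(String[] args) throws Exception {
        // In-memory HSQLDB: nothing to install, and the database vanishes
        // when the JVM exits.
        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "SA", "");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50))");
            st.execute("INSERT INTO users VALUES (1, 'alice')");
            try (ResultSet rs = st.executeQuery("SELECT name FROM users")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
```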
The best way, in my experience, is multiple database instances and/or schemas. Normally you'd have one for each user to develop against and sanity-check the running application, one for an automated build that runs the unit tests, and ideally one for each user to run their own unit tests against. And of course instances/schemas for demos and integration testing. Apart from the practical side, being able to do this ensures that deploying/upgrading the app/database will be pretty near faultless too.
Assuming you have a DAO layer, the only code that needs access to a real database at the unit test level is the DAO implementation, the business layer should be using a mock DAO implementation.
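Reusing the hypothetical User/UserDao types sketched in the earlier answer, a mock DAO for the business-layer tests could be as simple as:

```java
import java.util.ArrayList;
import java.util.List;

// In-memory stand-in for the real DAO; business-layer unit tests run against
// this, so no database is needed at that level.
public class InMemoryUserDao implements UserDao {
    private final List<User> users = new ArrayList<>();

    public void add(User user) {
        users.add(user);
    }

    @Override
    public List<User> findAll() {
        return new ArrayList<>(users);
    }
}
```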