I know there are similar questions, but I want to ask for advice on my particular case. I have a web application that uses JDBC. At startup, this application creates a Map of connections. When a database change needs to be made, a connection from this map (there are multiple connections for several databases, and reading and writing are done on the same database for each of them) is passed into a DAO class via its constructor, and the query is executed using that connection.
The thing is that a change is now needed: reading and writing will be done on two different databases, and the code needs to be changed to make this work. I need to know the best approach for making this change.
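One possible direction, sketched below with made-up class and table names: give the DAO two connections, one for the read database and one for the write database, instead of a single connection from the startup map. The map would then hold a read/write pair per logical database, and whatever builds the DAO picks the right pair.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical DAO taking separate read and write connections
// instead of the single connection from the startup map.
public class CustomerDao {
    private final Connection readConnection;   // points at the read database
    private final Connection writeConnection;  // points at the write database

    public CustomerDao(Connection readConnection, Connection writeConnection) {
        this.readConnection = readConnection;
        this.writeConnection = writeConnection;
    }

    public String findName(long id) throws SQLException {
        try (PreparedStatement ps =
                 readConnection.prepareStatement("SELECT name FROM customer WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }

    public void rename(long id, String name) throws SQLException {
        try (PreparedStatement ps =
                 writeConnection.prepareStatement("UPDATE customer SET name = ? WHERE id = ?")) {
            ps.setString(1, name);
            ps.setLong(2, id);
            ps.executeUpdate();
        }
    }
}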
I'm building an application using Java and Spring Boot where I want to query two foreign databases (they might have different schemas and data) on every run. After accessing those databases, I would then like to store the result (my business logic) in a local static database.
I originally wanted to store all the database details (user, password, URL) in application.properties, but then realized this might not be best practice, since the details for the two DBs I'm querying will be received as input from the user. Therefore, I'm not sure it's a good idea to update and overwrite application.properties every time I receive a new request (please let me know if there's a better way to do this).
Assuming I have the DBs' info in application.properties, I've followed multiple tutorials on multiple DB connections in Spring, and they all follow something along the lines of making a configuration file for each DB and a repository/DAO class for each DB, which references a model of that DB. That seems a bit problematic for me, as I don't know the schemas of the databases beforehand, so I can't define a model class. And even if I did, it would probably differ across databases, so I'm really not sure what to do.
Is there a more flexible/versatile way to query "foreign" databases with Spring or old school Java given that I don't know what their schemas might look like?
Any help is greatly appreciated!
Maintaining the configuration for multiple databases in application.properties or in a config class is the usual best practice. Refer here - https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto-two-datasources
You can have a POJO with the DB properties that gets populated from the user-provided values. Use that POJO in a DB config class to connect to the different databases.
Not knowing the schema is not a problem, as you can handle the data with plain Java collections.
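As a rough sketch of that idea (all class and method names here are illustrative): a small properties POJO filled from the user's input, turned into a DataSource on the fly, with rows read into generic collections so no entity model is required.

import java.util.List;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

// Hypothetical POJO holding the connection details the user submits.
class DbProperties {
    String url;
    String username;
    String password;
}

// Builds a DataSource per request and reads rows into plain collections,
// so no model/entity class for the foreign schema is needed.
class ForeignDbQueryService {

    DataSource dataSource(DbProperties props) {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setUrl(props.url);
        ds.setUsername(props.username);
        ds.setPassword(props.password);
        return ds;
    }

    List<Map<String, Object>> query(DbProperties props, String sql) {
        // Each row comes back as a Map of column name -> value.
        return new JdbcTemplate(dataSource(props)).queryForList(sql);
    }
}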
I have two databases for one system. One is OpenLDAP and the other is MongoDB. To be specific, the OpenLDAP instance is used by Atlassian Crowd, which we use. I need to synchronize users between these two databases. That is,
If I create a user, it is created in OpenLDAP by default, and it has to be created in MongoDB as well.
In the past there were issues in handling this, so there may be users who exist in OpenLDAP but not in MongoDB. I need to find these users as well.
If I delete or update a user in one, I need the delete or update to happen in both DBs.
I am going to keep a cached copy of LDAP using Redis. What is the best way to synchronize data between these two databases to meet the above expectations?
If it helps, I am using Java in the backend.
Two possible ways:
(Preferred) Design your code so you can "plug in" database handlers for the different databases, and access them through a facade that hides the underlying databases from the caller. Creating a user, for example, would be something like this:
createUser() -> foreach dbhandle do dbhandle->createUser() forend
The same applies to deleting or updating any data. This approach should also solve problem 2.
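A minimal Java sketch of that facade idea (the interface and class names are just illustrative):

import java.util.List;

// Each backing store (OpenLDAP, MongoDB, ...) gets its own handler.
interface UserStore {
    void createUser(String username);
    void deleteUser(String username);
}

// Facade: callers never see which databases are behind it.
class UserService {
    private final List<UserStore> stores;

    UserService(List<UserStore> stores) {
        this.stores = stores;
    }

    void createUser(String username) {
        for (UserStore store : stores) {
            store.createUser(username);   // fan out to every store
        }
    }

    void deleteUser(String username) {
        for (UserStore store : stores) {
            store.deleteUser(username);
        }
    }
}

Adding another store later is then just another UserStore implementation registered with the facade.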
Alternatively, you can update just one database and have a background script that propagates the changes to the others. This approach lets you work with just one database and have the script handle the rest, but it is more expensive and less reliable (you might read from a database that has not yet been updated from the master).
I am brand new to the concept of embedded databases and have chosen HSQLDB to be the embedded DB for my Java app. I think I am fundamentally not understanding something: nowhere do I see how/where to:
Define username/password credentials that must be used for connecting to a database
Create a new database (e.g., db_myapp)
Create tables for that new database
With a non-embedded ("normal") DB, I would first use a DB client to connect to the database, and CREATE the db_myapp DB as well as any tables it should have. My app would then expect those tables to exist at runtime.
But with HSQLDB, I have no such DB server to connect to, so I don't see how/where I can create these databases/tables/credentials ahead of time, before my app runs.
And maybe that's exactly what an "embedded" DB does; perhaps it's an entire DB embedded inside a JDBC driver? In any event, I still need a way to accomplish the three things listed above.
The only thing I can think of is to run some initialization code every time that my app starts up. This code would check for the existence of these constructs, and if they don't exist, then it would create them.
There are several problems here:
This approach might work with databases and tables, but not the credentials I need on the JDBC Connection itself. How/where do I create those?
I'm not even sure if this is the right/normal approach to using an embedded HSQLDB; can someone confirm I'm on track (that is, the "check-to-see-if-it-exists-and-if-not-then-create" approach)?
What happens if I accidentally execute code that tries to create a new database/table even when it already exists? Will HSQLDB just ignore it, or will it blow away my existing DB/tables?
The short answer is that you're pretty much on the right track.
Connecting to the embedded database is really no different from connecting to a normal db server, except that the connection string is a bit different. This section has information on that. The thing is that you don't really have separate 'databases' to choose from, it's just specified in the connection string. For the connection:
Connection c = DriverManager.getConnection("jdbc:hsqldb:file:/opt/db/testdb", "SA", "");
This will give you a connection to an embedded database engine that persists the data in the file at /opt/db/testdb. The default username for an embedded database will always be 'SA' with no password. I honestly don't know if it'll work, but if you really need to set a different password, you can try executing ALTER USER SA SET PASSWORD <newPassword>. It'll probably work...
As far as creating tables and such, there are a couple of ways of going about this, depending on whether the database will be persisted to a file or held in memory. Often, embedded DBs get used for pretty simple data, so the tables get created by executing a statement right after initializing the connection. CREATE TABLE IF NOT EXISTS ... is the usual way of doing things; it creates the table only if it doesn't already exist.
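For example, a small startup initializer along these lines (the table name is made up, and it reuses the connection string from above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Runs at application startup: connects to the embedded database and
// creates the schema only if it is not already there.
public class DbInit {
    public static void main(String[] args) throws SQLException {
        try (Connection c = DriverManager.getConnection(
                     "jdbc:hsqldb:file:/opt/db/testdb", "SA", "");
             Statement st = c.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS app_user ("
                     + " id INTEGER PRIMARY KEY,"
                     + " name VARCHAR(100))");
        }
    }
}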
If you're working with a file-based database, then HSQLDB gives you another option. Take a look at this documentation about accessing a database using their tools. That would let you create the file-based database ahead of time, set things like username/password, and set up all your tables. Then you can just copy the resulting file over to be used by your application, so everything is set up before your application connects to it.
So ultimately, you have the option to go either way. You can either have your application set everything up when the connection is initialized, or you can set it up manually ahead of time. My preference is to have the application set it up in code simply because then your table definitions are kept closer to the code that actually uses them. I haven't used an embedded database like that for really complex data, though, so I can't honestly say how well that scales.
Since I'm not really proficient with databases, some details may be irrelevant, but I'll include everything:
As part of a project at my University, we're creating a website that uses JSP and servlets, with a MySQL server as the backend.
I'm in charge of setting up the tables on the DB, and creating the Java classes to interact with it. However, we can only connect to the MySQL server from inside the University, while we all (7 people) work mostly at home.
I'm creating an interface QueryHandler which has a method that takes a string (representing a query) and returns a ResultSet. My question is this: how do I create a class implementing this interface that simulates a database, so that others can use different DBHandlers without noticing the difference, and so that I can test different queries without connecting to the actual MySQL database?
EDIT: I'm not too sure about the differences between SQL databases, but obviously all the queries I run on MySQL should also run on the mock.
Why not just install your own MySQL database for testing? It runs on Windows, Mac and Linux, and it's not too resource heavy. I have it installed on my laptop for local testing.
Your API appears to be flawed. You should not be returning ResultSets to clients. By doing so, you are forever forcing your clients to rely on a relational database backend. Your data access layer needs to hide all of the details of how your data is actually structured and stored.
Instead of returning a ResultSet, consider returning a List or allowing the client to supply a Stream that your data access component can write to.
This will make unit tests trivial for the clients of the API and will allow you to swap storage mechanisms at will.
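For example, instead of handing back a ResultSet, the contract could look something like this (the names are illustrative):

import java.util.List;

// Hypothetical domain object returned by the data access layer.
class User {
    final long id;
    final String name;
    User(long id, String name) { this.id = id; this.name = name; }
}

// Clients depend only on this interface; the MySQL implementation,
// an in-memory mock, or anything else can sit behind it.
interface UserRepository {
    List<User> findAll();
    User findById(long id);
}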
Try Derby. It's a free server you can use to test against, if you don't mind having to change drivers when you go back to MySQL. You might be limited in the kinds of queries you can run, though. I'm not sure if MySQL has any special syntax outside of standard SQL.
How about using HSQLDB for offline tests? It won't behave exactly like a MySQL DB, but it's a fast in-memory SQL DB that should satisfy most of your needs.
In my experience the best way is multiple database instances and/or schemas. Normally you'd have one for each user to develop against and sanity-check the running application, one for the automated build that runs the unit tests, and ideally one for each user to run their own unit tests against. And of course instances/schemas for demos and integration testing. Apart from the practical side, being able to do this ensures that deploying/upgrading the app and database will be pretty near faultless too.
Assuming you have a DAO layer, the only code that needs access to a real database at the unit-test level is the DAO implementation; the business layer should be using a mock DAO implementation.
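Concretely, the business-layer tests then never open a JDBC connection at all; a hand-rolled in-memory mock like this sketch (hypothetical names) is usually enough:

import java.util.ArrayList;
import java.util.List;

// Hypothetical DAO contract used by the business layer.
interface OrderDao {
    void save(String order);
    List<String> findAll();
}

// In-memory mock used only in unit tests; no database required.
class InMemoryOrderDao implements OrderDao {
    private final List<String> orders = new ArrayList<>();

    public void save(String order) {
        orders.add(order);
    }

    public List<String> findAll() {
        return new ArrayList<>(orders);
    }
}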
I am not very familiar with databases and what they offer outside of the CRUD operations.
My research has led me to triggers. Basically it looks like triggers offer this type of functionality:
(from Wikipedia)
There are typically three triggering events that cause triggers to "fire":
INSERT event (as a new record is being inserted into the database).
UPDATE event (as a record is being changed).
DELETE event (as a record is being deleted).
My question is: is there some way I can be notified in Java (preferably including the data that changed) by the database when a record is Updated/Deleted/Inserted using some sort of trigger semantics?
What might be some alternate solutions to this problem? How can I listen to database events?
The main reason I want to do this is a scenario like this:
I have 5 client applications all in different processes/existing across different PCs. They all share a common database (Postgres in this case).
Let's say one client changes a record in the DB that all 5 of the clients are "interested" in. I am trying to think of ways for the clients to be "notified" of the change (preferably with the affected data attached), instead of having them query for the data at some interval.
Using Oracle, you can set up a trigger on a table and then have the trigger send a JMS message. Oracle has two different JMS implementations. You can then have a process that 'listens' for the message using the JDBC driver. I have used this method to push changes out to my application instead of polling.
If you are using a Java database (H2) you have additional options. In my current application (SIEM) I have triggers in H2 that publish change events using JMX.
Don't mix up the database (which contains the data) and events on that data.
Triggers are one way, but normally you will have a persistence layer in your application. This layer can choose to fire off events when certain things happen - say to a JMS topic.
Triggers are a last-ditch thing, as you're then operating on relational items rather than "events" on the data. (For example, an "update" could in reality map to a "company changed legal name" event.) If you rely on the DB, you'll have to map the inserts and updates back to real-life events... which you already knew about!
You can then layer other stuff on top of these notifications - like event stream processing - to find events that others are interested in.
James
Hmm. So you're using PostgreSQL and you want to "listen" for events and be "notified" when they occur?
http://www.postgresql.org/docs/8.3/static/sql-listen.html
http://www.postgresql.org/docs/8.3/static/sql-notify.html
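From the Java side, the PostgreSQL JDBC driver exposes those notifications roughly like this (a simple polling sketch; the exact unwrapping call and channel name depend on your driver version and setup):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class PgListener {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
            "jdbc:postgresql://localhost:5432/mydb", "user", "pass");
        try (Statement st = conn.createStatement()) {
            st.execute("LISTEN record_changed");   // channel name is up to you
        }
        PGConnection pg = conn.unwrap(PGConnection.class);
        while (true) {
            PGNotification[] notes = pg.getNotifications();  // pending NOTIFYs, if any
            if (notes != null) {
                for (PGNotification n : notes) {
                    System.out.println("got " + n.getName() + ": " + n.getParameter());
                }
            }
            Thread.sleep(1000);   // simple polling loop
        }
    }
}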
Hope this helps!
Calling external processes from the database is very vendor specific.
Just off the top of my head:
SQLServer can call CLR programs from triggers,
postgresql can call arbitrary C functions loaded dynamically,
MySQL can call arbitrary C functions, but they must be compiled in,
Sybase can make system calls if set up to do so.
The simplest thing to do is to have the insert/update/delete triggers make an entry in some log table, and have your Java program monitor that table. Good columns to have in your log table would be things like EVENT_CODE, LOG_DATETIME, and LOG_MSG.
Unless you require very high performance or need to handle 100Ks of records, that is probably sufficient.
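A sketch of the polling side of that approach, using the column names suggested above and a made-up table name:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;

// Periodically reads new rows from the trigger-populated log table.
public class LogTablePoller {
    private Timestamp lastSeen = new Timestamp(0);

    public void poll(Connection conn) throws SQLException {
        String sql = "SELECT event_code, log_datetime, log_msg "
                   + "FROM change_log WHERE log_datetime > ? ORDER BY log_datetime";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setTimestamp(1, lastSeen);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    lastSeen = rs.getTimestamp("log_datetime");
                    System.out.println(rs.getString("event_code") + ": "
                                       + rs.getString("log_msg"));
                }
            }
        }
    }
}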
I think you're confusing two things. They are both highly db vendor specific.
The first I shall call "triggers". I am sure there is at least one DB vendor who thinks triggers are different from this, but bear with me. A trigger is a server-side piece of code that can be attached to a table. For instance, you could run a PL/pgSQL stored procedure on every update to table X. Some databases allow you to write these in real programming languages, others only in their variant of SQL. Triggers are typically reasonably fast and scalable.
The other I shall call "events". These are triggers that fire in the database and allow you to define an event handler in your client program, i.e., any time there are updates to the clients table, fire updateClientsList in your program. For instance, using Python and Firebird, see http://www.firebirdsql.org/devel/python/docs/3.3.0/beyond-python-db-api.html#database-event-notification
I believe the earlier suggestion to monitor a log table is an equivalent way to implement this with some other databases. Maybe Oracle? MSSQL Notification Services, mentioned in another answer, is another implementation of this as well.
I would go so far as to say you'd better REALLY know why you want the database to notify your client program, otherwise you should stick with server side triggers.
What you're asking completely depends on both the database you're using and the framework you're using to communicate with your database.
If you're using something like Hibernate as your persistence layer, it has a set of listeners and interceptors that you can use to monitor records going in and out of the database.
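For instance, a Hibernate 5 interceptor could look roughly like this (a sketch only; the listener/interceptor API has shifted between Hibernate versions):

import java.io.Serializable;
import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

// Fires whenever Hibernate persists or deletes an entity through this session factory.
public class AuditInterceptor extends EmptyInterceptor {

    @Override
    public boolean onSave(Object entity, Serializable id, Object[] state,
                          String[] propertyNames, Type[] types) {
        System.out.println("inserted " + entity.getClass().getSimpleName() + " id=" + id);
        return false;   // false = we did not modify the entity state
    }

    @Override
    public void onDelete(Object entity, Serializable id, Object[] state,
                         String[] propertyNames, Type[] types) {
        System.out.println("deleted " + entity.getClass().getSimpleName() + " id=" + id);
    }
}

Note that this only sees changes made through your own application's Hibernate sessions, not changes other clients make directly against the database.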
There are a few different techniques here depending on the database you're using. One idea is to poll the database (which I'm sure you're trying to avoid). Basically you could check for changes every so often.
Another solution (if you're using SQL Server 2005) is to use Notification Services, although this technology is supposedly being replaced in SQL Server 2008 (we haven't seen a pure replacement yet, but Microsoft has talked about it publicly).
This is usually what the standard client/server application is for. If all inserts/updates/deletes go through the server application, which then modifies the database, then the client applications can find out much more easily what changes were made.
If you are using PostgreSQL, it has the capability to listen for notifications from a JDBC client.
I would suggest using a "last updated" timestamp column, possibly together with the user who updated the record, and then letting the clients check their local record's timestamp against that of the persisted record.
The added complexity of callback/trigger functionality is just not worth it in my opinion, unless it is supported by the database backend and the client library used, like the notification services offered for SQL Server 2005 used together with ADO.NET.
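The client-side check is then just a timestamp comparison, something like this sketch (the table and column names are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;

// Returns true if the persisted row is newer than the copy the client holds.
public class StalenessCheck {
    public boolean isStale(Connection conn, long recordId, Timestamp localLastUpdated)
            throws SQLException {
        String sql = "SELECT last_updated FROM records WHERE id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, recordId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() && rs.getTimestamp("last_updated").after(localLastUpdated);
            }
        }
    }
}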