How do you insert with default values in Spring Data JDBC - Java

CrudRepository#save doesn't let you use column defaults: null fields of an entity are interpreted as NULL, not DEFAULT.
If I use a custom @Query("INSERT INTO ... DEFAULT ..."), then I'm unable to obtain the ID of the inserted row.

There is currently no built-in way of using the default values from the database.
While @Jay's answer isn't aimed at Spring Data JDBC, the approach of setting the attributes to their default values in the constructor does work with Spring Data JDBC as well.
The alternative would be to implement a custom method which performs the insert and reads the default values back from the database.
AFAIK not all databases support returning more than one value from an insert, so you might have to actually reselect the data that was written to the database.
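A minimal sketch of such a custom method, using Spring's NamedParameterJdbcTemplate; the table invoice and its defaulted columns (status, created_on) are assumptions for illustration, not part of the question:

import java.math.BigDecimal;
import java.util.Map;

import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
import org.springframework.jdbc.support.GeneratedKeyHolder;
import org.springframework.jdbc.support.KeyHolder;

class InvoiceInserts {

    private final NamedParameterJdbcTemplate jdbc;

    InvoiceInserts(NamedParameterJdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    Map<String, Object> insertWithDefaults(long customerId, BigDecimal amount) {
        // Only list the columns we actually provide; status and created_on are
        // omitted so the database applies their DEFAULT values.
        KeyHolder keyHolder = new GeneratedKeyHolder();
        jdbc.update(
                "INSERT INTO invoice (customer_id, amount) VALUES (:customerId, :amount)",
                new MapSqlParameterSource()
                        .addValue("customerId", customerId)
                        .addValue("amount", amount),
                keyHolder,
                new String[] { "id" });
        long id = keyHolder.getKey().longValue();
        // Second round trip: reselect the row so the defaulted columns come back populated.
        return jdbc.queryForMap(
                "SELECT id, customer_id, amount, status, created_on FROM invoice WHERE id = :id",
                new MapSqlParameterSource("id", id));
    }
}

In a real application this would typically live in a custom repository fragment so it can be mixed into the CrudRepository interface.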

Related

Java JDBC Retrieve values from DEFAULTs after an insert

Does anyone know of a standard way to retrieve values defined with DEFAULT in a database when you insert?
These are not primary keys but other columns; the getGeneratedKeys method only returns auto-increment keys, but I have other defaults like LastUpdate (date) or CreatedOn (date).
I realize that some databases like MSSQL have an OUTPUT clause and Oracle a RETURNING option, but I'm looking for a common way to do it.
Use the generated key so you can then follow up with a SELECT allTheFieldsYouCareAbout FROM tableYouJustAddedSomethingTo WHERE unid = generatedKeyYouJustGot.
Yeah, that's annoying and somewhat dubious from a performance perspective (the primary key is doubtlessly indexed, so not too pricey, but it's still another back-and-forth over TCP or whatever pipe you're using to talk to your database).
It's also the only way that reliably works on all major JDBC drivers.
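A plain-JDBC sketch of that key-then-reselect approach; the table name audit_log and the defaulted columns created_on and last_update are placeholders:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.Timestamp;

class ReadDefaultsAfterInsert {

    static void insertAndReadDefaults(Connection connection) throws SQLException {
        long id;
        try (PreparedStatement insert = connection.prepareStatement(
                "INSERT INTO audit_log (message) VALUES (?)",
                Statement.RETURN_GENERATED_KEYS)) {
            insert.setString(1, "user logged in");
            insert.executeUpdate();
            try (ResultSet keys = insert.getGeneratedKeys()) {
                keys.next();
                id = keys.getLong(1); // the auto-increment key JDBC can hand back
            }
        }

        // Second round trip: read back the columns the database filled in via DEFAULT.
        try (PreparedStatement select = connection.prepareStatement(
                "SELECT created_on, last_update FROM audit_log WHERE unid = ?")) {
            select.setLong(1, id);
            try (ResultSet rs = select.executeQuery()) {
                if (rs.next()) {
                    Timestamp createdOn = rs.getTimestamp("created_on");
                    Timestamp lastUpdate = rs.getTimestamp("last_update");
                    // use createdOn / lastUpdate as needed
                }
            }
        }
    }
}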

How to store date as Oracle SYSDATE with JpaRepository save method?

I am working on a Spring Data JPA project, where I want to store Oracle SYSDATE in a date field of a table. I can not modify the table at all.
Right now I am passing new Date() to that date field, which is not correct because the Oracle server is in a different timezone.
I am not writing any query to insert the data; instead I am using the JpaRepository save() method.
How can I do this?
P.S. I do not want to hard code the timezone of the database server in my code.
There is no direct way to do this (see Setting default values for columns in JPA).
What you could do is to perform a select SYSDATE from dual and use the result to set your property.
The method to get the SYSDATE could live in your Spring Data repository:
@Query(value = "select SYSDATE from dual", nativeQuery = true)
Date currentDate();
You could set the value in a @PrePersist listener (see onSave() for any entity saved with Hibernate/Spring Data repositories).
But I think you can't perform queries in those listeners, so the next step would be to create a custom implementation of Spring Data's save method which fetches such a value and keeps it available for the listener before actually saving anything. Alternatively, one could use a separate connection for the query.
Obviously, this all adds another database roundtrip, which is rather expensive.
An alternative would be to get the current time of the database server once, use it only to determine the correct offset, and create the timestamps locally using that offset. This is much faster and easier, but it breaks when the application server and the database server have different daylight saving time rules.
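A minimal sketch of the select-then-set variant, wiring the native query from above into a service; the entity Order, its createdOn property and the OrderRepository are illustrative assumptions:

import java.util.Date;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

interface OrderRepository extends JpaRepository<Order, Long> {

    // Ask the database for its clock instead of trusting the JVM clock/timezone.
    @Query(value = "select SYSDATE from dual", nativeQuery = true)
    Date currentDate();
}

class OrderService {

    private final OrderRepository orders;

    OrderService(OrderRepository orders) {
        this.orders = orders;
    }

    Order create(Order order) {
        // One extra round trip, but the timestamp is taken from the Oracle server.
        order.setCreatedOn(orders.currentDate());
        return orders.save(order);
    }
}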

Using inet postgres datatype with OpenJPA

My application is using OpenJPA to connect with a Postgres database. In the schema I am using the inet postgres datatype in a column. This field in Java is a String. I am able to read the field correctly, but I am having problems inserting a new row.
Searching on the Internet I have found three possible solutions to do this:
Creating a native query. This method works, but in my specific case writing a native query for this insert would mean rewriting more queries that were previously managed by OpenJPA, which can lead to lots of bugs. So it is not the most suitable solution in this case.
Creating a PostgresDictionary as in this question: How to use Postgres inet data type with OpenJPA?. I have implemented this exactly as that user explains: I created the custom PostgresDictionary, added the columnDefinition to the @Column annotation and added the property to persistence.xml. But my custom PostgresDictionary is never called.
When the application creates the dictionary, it keeps using org.apache.openjpa.jdbc.sql.PostgresDictionary instead of the custom one.
Implementing a custom strategy, as in this example: http://webspherepersistence.blogspot.co.at/2009/04/custom-orm-with-openjpa.html. But in order to implement the strategy, I have to set the type of the column from the java.sql.Types class (http://docs.oracle.com/javase/6/docs/api/java/sql/Types.html?is-external=true), and there is no inet type in that class. I tried Types.OTHER, but I still get the same error indicating that the column is of type inet while the value I am trying to insert is varchar (String).
So, does anybody have an idea how to fix the mapping problem I am having?
The solution in point 2 was not working because the openjpa.jdbc.DBDictionary was being overridden by the class org.springframework.orm.jpa.vendor.OpenJpaVendorAdapter, which had its database property set to POSTGRESQL. That apparently sets the DBDictionary to org.apache.openjpa.jdbc.sql.PostgresDictionary regardless of the value configured in the persistence.xml property.
Deleting this database property from the OpenJpaVendorAdapter allowed me to use my custom PostgresDictionary.
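A sketch of the corrected wiring, assuming Java-based Spring configuration; the persistence unit name and the custom dictionary class com.example.MyPostgresDictionary are illustrative assumptions:

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.OpenJpaVendorAdapter;

@Configuration
class JpaConfig {

    @Bean
    LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
        OpenJpaVendorAdapter adapter = new OpenJpaVendorAdapter();
        // Intentionally NOT calling adapter.setDatabase(...): that property forces
        // org.apache.openjpa.jdbc.sql.PostgresDictionary and ignores the
        // openjpa.jdbc.DBDictionary value from persistence.xml, e.g.
        // <property name="openjpa.jdbc.DBDictionary" value="com.example.MyPostgresDictionary"/>

        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setDataSource(dataSource);
        emf.setJpaVendorAdapter(adapter);
        emf.setPersistenceUnitName("my-persistence-unit"); // assumed unit name from persistence.xml
        return emf;
    }
}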

Join Postgresql rows with Mongodb documents based on specific columns

I'm using MongoDB and PostgreSQL in my application. The reason for using MongoDB is that any number of new fields might be added, and we store the data for those fields in MongoDB.
We are storing our fixed field values in PostgreSQL and custom field values in MongoDB.
E.g.
**Employee Table (RDBMS):**
id Name Salary
1 Krish 40000
**Employee Collection (MongoDB):**
{
  _id: <some autogenerated id of MongoDB>,
  instanceId: 1,   // the id of the SQL row (manually assigned)
  employeeCode: "A001"
}
We get the records from SQL and, using their ids, fetch the related records from MongoDB. Then we map the result to get the values of the new fields and send them to the UI.
Now I'm looking for an optimized solution to get the MongoDB results into the PostgreSQL POJO / model so that I don't have to fetch the data from MongoDB manually by passing the SQL ids and then mapping them again.
Is there any way to connect MongoDB with PostgreSQL through columns (here the id of the RDBMS and the instanceId of MongoDB) so that with one fetch I can get the related Mongo result too? Any kind of return type is acceptable, but I need all of it in one call.
I'm using Hibernate and Spring in my application.
Using Spring Data might be the best solution for your use case, since it supports both:
JPA
MongoDB
You can still get all the data in one request, but that doesn't mean you have to use a single DB call. You can have one service call which spans two database calls. Because the PostgreSQL row is probably the primary entity, I advise you to share the PostgreSQL primary key with MongoDB too.
There's no need to have separate IDs. This way you can simply fetch the SQL row and the Mongo document by the same ID. Sharing the same ID also lets you process those two requests concurrently and merge the results before returning from the service call, so the service method duration is not the sum of the two repository calls but the maximum of the two.
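A sketch of that pattern; EmployeeRepository (Spring Data JPA), EmployeeDocumentRepository (Spring Data MongoDB) and the Employee/EmployeeDocument/EmployeeView types are assumed names, not from the question:

import java.util.concurrent.CompletableFuture;

class EmployeeService {

    private final EmployeeRepository sqlRepository;            // Spring Data JPA, fixed columns
    private final EmployeeDocumentRepository mongoRepository;  // Spring Data MongoDB, custom fields

    EmployeeService(EmployeeRepository sqlRepository, EmployeeDocumentRepository mongoRepository) {
        this.sqlRepository = sqlRepository;
        this.mongoRepository = mongoRepository;
    }

    EmployeeView findById(long id) {
        // Both lookups use the same shared id and run concurrently, so the call
        // takes roughly max(jpa, mongo) rather than their sum.
        CompletableFuture<Employee> fixedPart =
                CompletableFuture.supplyAsync(() -> sqlRepository.findById(id).orElseThrow());
        CompletableFuture<EmployeeDocument> customPart =
                CompletableFuture.supplyAsync(() -> mongoRepository.findById(id).orElseThrow());
        // Merge fixed columns and dynamic fields into one view object for the UI.
        return new EmployeeView(fixedPart.join(), customPart.join());
    }
}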
Astonishingly, yes, you potentially can. There's a foreign data wrapper named mongo_fdw that allows PostgreSQL to query MongoDB. I haven't used it and have no opinion as to its performance, utility or quality.
I would be very surprised if you could effectively use this via Hibernate, unless you can convince Hibernate that the FDW mapped "tables" are just views. You might have more luck with EclipseLink and their "NoSQL" support if you want to do it at the Java level.
Separately, this sounds like a monstrosity of a design. There are many sane ways to do what you want within a decent RDBMS, without going for a hybrid database platform. There's a time and a place for hybrid, but I really doubt your situation justifies the complexity.
Just use PostgreSQL's json / jsonb support to handle the dynamic mappings. Or use traditional options like storing JSON as text fields, storing XML, or even EAV mapping. Don't build a Rube Goldberg machine.
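For illustration, a plain-JDBC sketch of keeping the dynamic fields in a jsonb column on the same employee row; the column name custom_fields is an assumption:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class EmployeeJsonbExample {

    static void insertEmployee(Connection connection) throws SQLException {
        // The ?::jsonb cast lets the PostgreSQL driver accept the JSON as a plain string.
        try (PreparedStatement ps = connection.prepareStatement(
                "INSERT INTO employee (id, name, salary, custom_fields) VALUES (?, ?, ?, ?::jsonb)")) {
            ps.setLong(1, 1L);
            ps.setString(2, "Krish");
            ps.setInt(3, 40000);
            ps.setString(4, "{\"employeeCode\": \"A001\"}");
            ps.executeUpdate();
        }
    }
}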

Check for column's character set and collation in JDBC

Is there a way to check if a certain MySQL column has a specific character set and collation with JDBC?
For those who need some background information: the application I am working on has changed its database layout across versions. The update mechanism is rather basic: during startup, the application checks whether a change is already there and, if not, alters the table accordingly. Right now I need to change an existing column to be unique and case sensitive (which means I need to change the column's character set and collation accordingly).
You will have to query it from INFORMATION_SCHEMA.COLUMNS. The CHARACTER_SET_NAME and COLLATION_NAME fields are what you need.
There is nothing in the JDBC spec that provides access to this.
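A minimal JDBC sketch of that lookup; the target character set and collation (here utf8 / utf8_bin, a case-sensitive collation) are examples, adjust them to whatever your migration requires:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class ColumnCollationCheck {

    static boolean hasCharsetAndCollation(Connection connection, String schema, String table,
            String column, String charset, String collation) throws SQLException {
        String sql = "SELECT CHARACTER_SET_NAME, COLLATION_NAME "
                   + "FROM INFORMATION_SCHEMA.COLUMNS "
                   + "WHERE TABLE_SCHEMA = ? AND TABLE_NAME = ? AND COLUMN_NAME = ?";
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setString(1, schema);
            ps.setString(2, table);
            ps.setString(3, column);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return false; // column not found: treat it as "still needs the ALTER"
                }
                return charset.equals(rs.getString("CHARACTER_SET_NAME"))
                        && collation.equals(rs.getString("COLLATION_NAME"));
            }
        }
    }
}

During startup you would call this check and only issue the ALTER TABLE statement when it returns false, matching the update mechanism described in the question.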
