How to programmatically create a MySQL index with limited length using JPA?

Similar to here, I'm annotating my class with
@Table(indexes = {@Index(columnList = "name")})
which attempts to create a non-unique index over the maximum length of the varchar column. Unfortunately that fails, because it's a varchar(255) column with the utf8mb4 character set, which exceeds the index key length MySQL allows. phpMyAdmin added KEY '...' (name(191)) when I clicked the respective buttons in its UI, so at least my software runs efficient queries now.
Now I'm wondering whether it's possible to have my Java class auto-generate the index with a limited length when it creates the database schema. The code builds on spring-boot-starter-data-jpa:1.4.2.RELEASE.

There are other options besides trying to get the third-party software to do something it may or may not support:
Live with the 191-character limitation on the indexed column. Or do you really have values whose lengths fall between 191 and 255?
Change to utf8 (from utf8mb4), and lose the ability to store emoji and some Chinese characters.
In MySQL 5.6 there is a clumsy process to raise the 767-byte limit you are bumping into; a sketch follows below.
Upgrade to MySQL 5.7, which virtually eliminates the problem.
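For reference, a sketch of that 5.6 workaround (the system variables are real MySQL 5.6 options; the table and index names are made up for illustration):

-- MySQL 5.6: raise the 767-byte index-prefix limit for InnoDB
SET GLOBAL innodb_file_per_table = ON;
SET GLOBAL innodb_file_format = Barracuda;
SET GLOBAL innodb_large_prefix = ON;
-- The table must use a Barracuda row format for the larger prefix to apply
ALTER TABLE person ROW_FORMAT = DYNAMIC;
-- With that in place, an index over a full utf8mb4 varchar(255) fits
ALTER TABLE person ADD INDEX idx_person_name (name);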

You should only use the JPA-generated table scripts as a starting point, and you should never use JPA to create your tables in production.
If you have "create table" privileges, so that you don't need a DBA to create and modify the database, then I recommend using Flyway to manage database creation and migration. If you need to be database agnostic, and like long XML files, you can also use Liquibase.
With Flyway, you add a new script every time you add one or more entities. I typically let JPA generate the script, then copy what I need and perhaps make some modifications - for instance, varchar(255) means 255 bytes on some databases, so you may want to modify that if you are storing something other than Latin-1.
Flyway is very simple to use, and it is fully integrated into Spring Boot, so you just add the index the way you want it in the first (or a later) Flyway script, src/main/resources/db/migration/V1__initial_script.sql.
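As an illustration, such a first migration could contain the prefix index directly (a sketch only - the table definition is made up to match the question):

-- V1__initial_script.sql
CREATE TABLE person (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL
) CHARACTER SET utf8mb4;
-- A 191-character prefix keeps the key within InnoDB's 767-byte limit
CREATE INDEX idx_person_name ON person (name(191));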

Related

Java JDBC Retrieve values from DEFAULTs after an insert

Does anyone know of a standard way to retrieve values defined with DEFAULTs in a database when you insert?
These are not primary keys but other columns; the getGeneratedKeys method only returns values for auto-increment columns, but I have other defaults like LastUpdate (date) or CreatedOn (date).
I realize that some databases, like MSSQL, have an OUTPUT option, and Oracle has a RETURNING option, but I'm looking for a common way to do it.
Use the generated key so you can then follow up with a SELECT allTheFieldsYouCareAbout FROM tableYouJustAddedSomethingTo WHERE unid = generatedKeyYouJustGot.
Yeah, that's annoying and somewhat dubious from a performance perspective (the primary key is doubtless indexed, so the lookup isn't too pricey, but it's still another round trip over TCP or whatever pipe you're using to talk to your database).
It's also the only way that reliably works on all major JDBC drivers.
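A minimal sketch of that pattern (the orders table and its CreatedOn/LastUpdate DEFAULT columns are made up for illustration):

import java.sql.*;

public class InsertThenReadDefaults {
    static void insertAndRead(Connection conn, String customer) throws SQLException {
        // Step 1: insert and ask the driver for the generated key
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO orders (customer) VALUES (?)",
                Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, customer);
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                if (!keys.next()) return; // driver gave us no key
                long id = keys.getLong(1);
                // Step 2: the second round trip to read the DEFAULT-filled columns
                try (PreparedStatement sel = conn.prepareStatement(
                        "SELECT CreatedOn, LastUpdate FROM orders WHERE id = ?")) {
                    sel.setLong(1, id);
                    try (ResultSet rs = sel.executeQuery()) {
                        if (rs.next()) {
                            Timestamp createdOn = rs.getTimestamp("CreatedOn");
                            Timestamp lastUpdate = rs.getTimestamp("LastUpdate");
                            System.out.println(createdOn + " / " + lastUpdate);
                        }
                    }
                }
            }
        }
    }
}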

How to update Hibernate applications in production the right way?

I read the discussion about using hbm2ddl.auto=update to automatically apply schema changes to the database.
The thread is from 2008, and I do not know how safe it is to use the auto-update mode today.
We are running a small Java EE application on GlassFish with Hibernate 4.3.11 and PostgreSQL. We plan to use continuous integration with Jenkins.
Is it useful to work with hbm2ddl.auto=update enabled? Or is it better to use a simple alternative and apply/check the updates manually?
I know it is hard to give a blanket statement.
You should not use hbm2ddl.auto=update to update production databases.
A few reasons:
Hibernate will only ADD missing columns and will not modify existing ones. Therefore, if you rename a property (Client to Customer), Hibernate will create a new column Customer, leaving the column Client untouched. You will need to manually "move" the data there and remove the orphan column.
Hibernate will not remove constraints from columns that are no longer mapped. Thus, if your Client column was NOT NULL, any insert into that table will now fail in the first place, because Hibernate won't provide any data for the orphan column (which still has its NOT NULL constraint).
Hibernate will not touch the data types of existing columns. So, if you change a property type from String to Date, Hibernate will leave the column defined as varchar.
Hibernate does not remove columns whose properties you deleted, leading to data pollution and, in the worst case (the constraints remain in place), to a no-longer-working application.
If you create additional constraints on existing columns, Hibernate will not create them, because the columns already existed before. (You might miss important constraints on the production database that you added on existing columns.)
So, performing your updates on your own is safer. Since you have to keep track of what Hibernate does and does not handle anyway, you'd better do it yourself from scratch; the rename case above might look like the sketch below.
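A hand-written migration for that rename is short and loses nothing (a sketch in PostgreSQL syntax, since that is the database in the question; table and column names are made up):

-- Rename the column instead of letting Hibernate add a new, empty one
ALTER TABLE account RENAME COLUMN client TO customer;
-- For a changed property type (String -> Date), convert explicitly
ALTER TABLE account ALTER COLUMN created_at TYPE date USING created_at::date;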

Can we generate primary keys automatically without JDBC queries or calls?

I am trying to generate alphanumeric primary keys (e.g. TESLA1001) automatically in Hibernate. I am currently using an Oracle database, so I have a JDBC call to my_sequence.NEXTVAL (e.g. 1002) to get the next number and append it to the prefix (TESLA).
We are considering MySQL as an option, but it does not support sequences, so I would be forced to rewrite the custom ID generation as a JDBC call to a stored procedure.
Is there any way I can have a generic implementation that generates custom primary keys without JDBC and database-dependent queries? Then, if I need to test my application with MSSQL in the future, I only need to change my Hibernate configuration and things work fine!
Because you need a way to coordinate the sequence numbers, you'll always have to use a centralized sequence generator. An alphanumeric primary key will also perform worse on indexing than a UUID.
If I were you, I'd switch to UUID identifiers, which are both unique and portable across all major RDBMS.
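A minimal sketch of such a mapping (the entity is made up; the id is assigned in the application, so it needs no sequence, auto-increment column, or stored procedure):

import java.util.UUID;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Vehicle {

    // Assigned in Java, so it works identically on Oracle, MySQL, and MSSQL
    @Id
    @Column(length = 36)
    private String id = UUID.randomUUID().toString();

    protected Vehicle() { } // required by JPA

    public String getId() { return id; }
}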

Liquibase - common columns?

In my DB every table has 4 common columns - DATE_CREATED, USER_CREATED, DATE_MODIFIED, USER_MODIFIED - and I want to propagate this rule to all new tables implicitly.
Is it possible to do this without having to write the Liquibase script manually?
This is not possible using Liquibase (as far as I know).
The reason for this is simple:
What if you change your mind and add/remove one of the default columns later? If you then want to change all tables, that is not possible with Liquibase, as it would mean changing all existing changesets, which is not allowed.
You could use a DSL to generate your Liquibase scripts and add a fixed set of columns to every entity, but a fully automatic way would be difficult with the way Liquibase works.
There is nothing built into Liquibase to support this.
Your easiest option would be to use XML document entities, which work purely at the XML level and are therefore transparent to Liquibase. They allow you to attach common XML to your changelog files.
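A sketch of the entity approach (the column names are from the question; the rest is illustrative, and the namespace/schema attributes are shortened):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE databaseChangeLog [
<!ENTITY auditColumns "
    <column name='DATE_CREATED' type='datetime'/>
    <column name='USER_CREATED' type='varchar(64)'/>
    <column name='DATE_MODIFIED' type='datetime'/>
    <column name='USER_MODIFIED' type='varchar(64)'/>
">
]>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog">
    <!-- The XML parser expands &auditColumns; before Liquibase reads the file -->
    <changeSet id="1" author="me">
        <createTable tableName="CUSTOMER">
            <column name="ID" type="bigint"/>
            &auditColumns;
        </createTable>
    </changeSet>
</databaseChangeLog>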
A more complex approach would be to use the Liquibase extension system (http://liquibase.org/extensions), which allows you to redefine the logic that converts changesets into SQL. That would allow you to inject any logic you want, including common data types, standard columns, or anything else.
I do not think so.
My suggestion: don't add the four columns mentioned above to all tables, because existing entries would then hold NULL values for them in every table.
Instead, create a separate table consisting of a primary-key id, the table or entity name, and your four columns.

Performance improvement of queries against an encrypted table without changing the application code

I have tagged this problem with both Oracle and Java because both Oracle and Java solutions would be accepted.
I am new to Oracle security and have been presented with the problem below. I have done some research on the internet but have had no luck so far. At first I thought Oracle TDE might help, but according to Can Oracle TDE protect data from the DBA? it seems TDE doesn't protect data against the DBA, and that cannot be tolerated here.
Here is the problem:
I have a table containing millions of records. I have a Java application which queries this table using equality or range criteria against a column which is the table's primary key. The primary key column contains sensitive data and has therefore already been encrypted. As a result, queries that use normal (i.e. decrypted) values from the application cannot use the primary key's unique index access path. I need to improve the queries' performance without any changes to the application code (the application config can be modified if necessary, but not the code). Any changes on the database side are acceptable as long as that column remains encrypted.
Oracle people: what solution(s) do you suggest? Can I create an index on the decrypted column values and somehow force Oracle to use it? Can I use partitioning, such as hash partitioning? What about views? Any solution at all?
Java people: I have a very vague idea, which is to create a separate application in between (i.e. between the database and the application) that acts as a proxy: it receives queries from the application, replaces the decrypted values with encrypted values, sends them to the database, then receives the response and returns the results to the application. The proxy should behave like a database, so that the application can connect to it by changing only the connection string in its configuration file. Would this work? How?
Thanks for all your help in advance!
which queries this table using equality or range criteria against a column in the table which is the primary key column of the table
Finding a specific value is simple enough - you can store the data encrypted any way you like, even as a hash, and still retrieve a specific value using an index. But as per my comment elsewhere, you can't do range queries without either:
decrypting each and every row in the table
or
using an algorithm that can be cracked in a few seconds.
Using a linked list (or a related table) to define the order, instead of an algorithm with intrinsic ordering, would force a brute-force check on a much larger set of values - but it's nowhere near as secure as a properly encrypted value.
It doesn't matter whether you use Oracle, Java, or pencil and paper. It might be possible using quantum computing - but if you can't afford to ensure the security of your application or pay for good advice from an expert cryptographer, then you certainly won't be able to afford that.
How can I create an index on decrypted column values and somehow force Oracle to utilize this index?
Maybe you could create a function-based index in which you index the decrypted value.
create index ix1 on tablename (decryptfunction(pk1));
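Roughly like this, then (a sketch, not a tested solution - Oracle requires the function to be declared DETERMINISTIC before it can be used in a function-based index, and the query must use the very same expression; note that such an index stores decrypted values, which may undermine the reason for encrypting in the first place):

-- decryptfunction must be created with the DETERMINISTIC keyword
CREATE INDEX ix1 ON tablename (decryptfunction(pk1));
-- The predicate must match the indexed expression for the index to be used
SELECT * FROM tablename WHERE decryptfunction(pk1) = :plain_value;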
