I want to map a single table to two classes in JPA 2.1 (on Hibernate 5.0.2 with a MySQL database, if that's relevant). I know that's done with SINGLE_TABLE inheritance plus @DiscriminatorColumn and @DiscriminatorValue.
However, I want to discriminate based on a boolean column (well, a boolean field in the mapping; I'm not sure how the database handles that). DiscriminatorType only contains three values (STRING, CHAR, and INTEGER), none of which seems particularly right for my requirement. I could, I suppose, change my discriminator column to a more standard type, but I really only need a boolean distinction and don't care how the database stores that info.
While a good workaround for MySQL 5.5 would be appreciated (I imagine something like using CHAR and writing "0" and "1" as values, which should cast correctly given how MySQL stores them), I feel a database-agnostic solution is in order.
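For illustration, a minimal sketch of that CHAR-based workaround might look like the following; the entity, table, and column names are my own assumptions, not part of the original mapping.

import javax.persistence.*;

@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "active", discriminatorType = DiscriminatorType.CHAR)
public abstract class Account {
    @Id
    @GeneratedValue
    private Long id;    // the boolean-ish flag lives only in the discriminator column
}

@Entity
@DiscriminatorValue("1")
class ActiveAccount extends Account { }

@Entity
@DiscriminatorValue("0")
class InactiveAccount extends Account { }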
Related
I'm trying to insert some data into a database, but I have run into a problem while working with models that are subclasses of each other. I have a list that holds all of these subclasses.
List<Metric> metrics = experiment.getMetrics();
for (Metric m : metrics) {
    int id = m.getId();
    // type checking code
}
Metric has subclasses Rating and Quantity. Each of these in turn has its own uniquely defined table. I am conflicted over the idea of using type checking, but I don't see any immediate solution. One alternative, which doesn't seem any better, would be to create a new column in the Metric table called metric_type, but this would lead to something quite similar to type checking. Any suggestions?
You have encountered the object-relational impedance mismatch that comes from mapping between two not fully compatible systems. Since inheritance is not possible between tables in the relational model, you will have to sacrifice something in the object model that uses inheritance. There will be edge cases no matter what you do, unless you switch to an object database.
If you define custom CRUD operations in the classes that extend Metric, loading entities can be tricky. What exactly will be loaded by Metric.get(id) if each table has its own PK sequence and both a Rating and a Quantity can have the same numeric PK value?
You can take a look at how JPA solves this problem. It uses custom annotations, e.g. @MappedSuperclass and @Entity. I guess that's a form of type checking.
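For instance, a minimal sketch of that JPA-style mapping, assuming each subclass keeps its own table (names are illustrative only):

import javax.persistence.*;

@MappedSuperclass
public abstract class Metric {
    @Id
    @GeneratedValue
    protected int id;

    public int getId() {
        return id;
    }
}

@Entity
@Table(name = "rating")
class Rating extends Metric { }

@Entity
@Table(name = "quantity")
class Quantity extends Metric { }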
I wouldn't suggest type checking.
The OOP way to solve this would be to declare an insert method on the Metric class.
Then override the method in both Rating and Quantity with the appropriate code that inserts the object into the respective table.
for (Metric m : metrics) {
    int id = m.getId();
    m.insert();
}
Inside your loop, simply call insert(); thanks to late binding, the appropriate method will be called and the right code will be executed.
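A rough sketch of that shape; the DAO calls mentioned in the comments are hypothetical placeholders, not your actual persistence code.

public abstract class Metric {
    protected int id;

    public int getId() {
        return id;
    }

    // Each subclass knows which table its data belongs in.
    public abstract void insert();
}

public class Rating extends Metric {
    @Override
    public void insert() {
        // insert this Rating into the Rating table, e.g. via a (hypothetical) RatingDao
    }
}

public class Quantity extends Metric {
    @Override
    public void insert() {
        // insert this Quantity into the Quantity table, e.g. via a (hypothetical) QuantityDao
    }
}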
I have a situation where I'm using Hibernate (5.2.16) to map a table and one of the column values is constructed via a database function that takes the values of two other properties.
For some background, this is an SDE spatial table with an ST_GEOMETRY column. As far as I can tell, this isn't compatible with the two types of spatial API supported by Hibernate, but even if it were, I'm not doing any spatial manipulation, so I don't really need them; I just want to insert and update the geometry column.
I have absolutely no control over the structure of the table because it's dictated by another group using another tool (GIS).
Things I've tried:
Using a Hibernate UserType. The problem with this is that I only see a way to get and set the value with a PreparedStatement, without the ability to dictate the actual SQL used.
basic-custom-type
Using a Hibernate ColumnTransformer. This gives me direct control over the SQL used, but I can't use the values of two other properties in the SQL.
mapping-column-read-and-write
@Column(name="LATITUDE")
private BigDecimal latitude;

@Column(name="LONGITUDE")
private BigDecimal longitude;

@ColumnTransformer(
    read="sde.st_astext(shape)",
    write="sde.st_transform(sde.st_point(LONGITUDE,LATITUDE, 4326), 3857)"
)
@Generated(value=GenerationTime.ALWAYS)
@Column(name="SHAPE")
private String shape;
I get:
org.hibernate.AnnotationException: @WriteExpression must contain exactly one value placeholder ('?') character: property [shape] and column [SHAPE]
I've looked at Generated columns, but those are for values generated by the database.
mapping-generated
I've looked at Formula columns, but those are for values calculated and usable in Java, but aren't inserted or updated. mapping-column-formula
@Formula(value="sde.st_astext(shape)")
private String shape;
It's useful for some things, but I can't insert or update this.
I'm hoping that I've missed something. At this point I'm considering non-Hibernate/JPA solutions. This would be relatively easy with raw SQL and JDBC, but handling the rest of the table that way would be annoying and wouldn't match the rest of my code. I'd also have to do my own dirty checking and so on.
You can use Hibernate's support for database-generated values. It lets entity properties be generated by database functions and read back by Hibernate.
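A minimal sketch, assuming the SHAPE column is actually populated by the database itself (for example by a trigger or a generated column) rather than by Hibernate's INSERT statement:

// Hibernate re-reads the database-generated value after inserts and updates
// instead of writing it itself.
@Generated(GenerationTime.ALWAYS)
@Column(name = "SHAPE")
private String shape;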
This is a database-specific answer, but given your GIS problem domain and the dominance of PostGIS it may be relevant (if you use PostgreSQL and your DBA is OK with an upgrade).
PostgreSQL 12 introduces generated columns, which you could define with something similar to the following:
@Column(columnDefinition = "GEOMETRY GENERATED ALWAYS AS (st_transform(st_point(LONGITUDE, LATITUDE, 4326), 3857)) STORED")
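On the entity the field might then look roughly like this sketch; since PostgreSQL computes the value, the column is marked as not writable by Hibernate (the field and column names are assumptions):

@Column(name = "SHAPE", insertable = false, updatable = false,
        columnDefinition = "GEOMETRY GENERATED ALWAYS AS (st_transform(st_point(LONGITUDE, LATITUDE, 4326), 3857)) STORED")
private String shape;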
I'm looking for opinions, so I guess this is a 'which is better' question. I have a webapp built in JavaScript/jQuery and Struts that uses Hibernate to access data in a relational DB (MySQL). When an object/database field has a limited set of string values, is it better to use the full string in the object/DB, or a 'code' for that string, like a single CHAR instead of the entire string?
class User {
    int id;
    String userName;

    String type;        // values of 'Administrator', 'Regular'
    // OR
    char type;          // values of 'A', 'R'
    // OR
    char type;          // values of 'A', 'R'
    String typeString;  // can be returned on the fly based on 'type', or by the DB in a SQL CASE statement
}
If the database has the full text string, then it's easy coding all the way around, but it's wasting space (in the DB and in data transfer) on something that only has a few values.
If the database has just a 'code', then when presenting this field to a user (like in a grid of existing users, or a dropdown selection list when creating a new user) the char value must be converted to the full string. Then the question is where that conversion should be done. It could be at the DB level, where Hibernate can fill in the full string value from a CASE statement; this saves DB space, but not data transfer or memory. It could be at the object level, where it's done in the getter/setter for the 'type' field. Or it could be all the way in the GUI, where JavaScript converts the 'char' to the appropriate string for the user to see.
Also... if either method is OK to use, what might influence the choice you make? The number of different values? The max length of the strings? How many rows are expected in the table?
I'm sure every DB/programmer has come across this situation many times and probably has a preference.
If you only have a fixed set of user types like Admin and Regular, I think it will be easier to keep a static HashMap in your code and just store 'A' and 'R' in the database. Something like:
static HashMap<Character, String> userRoles = new HashMap<>();
static {
    userRoles.put('A', "Admin");
    userRoles.put('R', "Regular");
}
Whenever you get a result from the DB, you can just do userRoles.get(type) to look up the actual type. This saves space and it's also readable.
I would put the full name in the database alongside an associated short code or ID in some kind of lookup table. Use the shortcode/ID as the primary key for the lookup table, and as a foreign key from other tables. If someone needs to investigate the database layer, or someone needs to use the database for reporting, data warehousing, or analytics this will simplify things greatly.
It's commonly seen as bad practice to name variables, database tables, database columns, functions, etc. with unclear names or abbreviations that not everyone will understand - short codes like this should be seen the same way.
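A hedged sketch of what that could look like in JPA; the entity, table, and column names are assumptions.

import javax.persistence.*;

@Entity
@Table(name = "user_type")
public class UserType {
    @Id
    @Column(name = "code", length = 1)
    private String code;    // short code, e.g. "A" or "R", used as the primary key

    @Column(name = "name")
    private String name;    // full name, e.g. "Administrator" or "Regular"
}

@Entity
@Table(name = "app_user")
class User {
    @Id
    private int id;

    private String userName;

    // Foreign key to the lookup table; the full name travels with the association.
    @ManyToOne(fetch = FetchType.EAGER)
    @JoinColumn(name = "type_code")
    private UserType type;
}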
I think it's better to do the conversion from the type code to the type (and vice versa) as close to the database interaction as possible, in this case in Hibernate. Your application logic becomes more readable and intuitive if it uses the explicit types.
In my opinion, if (BMW.equals(carTypeCode)) {} is a lot more readable than if ("X".equals(carTypeCode)) {}.
I am not very familiar with Hibernate, but it would be great if you could leverage Hibernate for the mapping of the String to the DB representation and vice versa (maybe using CASE as you mentioned). Personally, I would probably have modeled these strings as enums and used something like Hibernate's enum type mapping. Also, you should think about making these type codes a little more readable by making them at least a few characters long; that comes in handy when you are debugging an issue by looking at a DB dump and don't have your type-code conversion chart at hand.
Performance-wise, I don't think either approach would have much impact in the average case.
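If you do go the enum route, here is a hedged sketch using a JPA 2.1 AttributeConverter to store a short code; the enum constants and codes are assumptions.

import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

public enum UserType {
    ADMINISTRATOR("A"),
    REGULAR("R");

    private final String code;

    UserType(String code) {
        this.code = code;
    }

    public String getCode() {
        return code;
    }

    public static UserType fromCode(String code) {
        for (UserType t : values()) {
            if (t.code.equals(code)) {
                return t;
            }
        }
        throw new IllegalArgumentException("Unknown user type code: " + code);
    }
}

// Converts between the enum and its single-character DB code on every read/write.
@Converter(autoApply = true)
class UserTypeConverter implements AttributeConverter<UserType, String> {
    @Override
    public String convertToDatabaseColumn(UserType type) {
        return type == null ? null : type.getCode();
    }

    @Override
    public UserType convertToEntityAttribute(String code) {
        return code == null ? null : UserType.fromCode(code);
    }
}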
I have seen ways on the internet to map a PostgreSQL enum to a Java enum, but all the methods I have seen do not fare well when I modify the enum in the DB. What I would like to do is create the Java enum values at runtime by querying the DB, or something along those lines. How would it be best to proceed?
For example, in the DB I have the enum {'a','b','c','d'} and one day I change it to {'a','x','d','e'}. Is there any good way to make sure I do not get consistency problems with the enum in Java? (Obviously a manual update IS my last choice.) I'm using a pre-9.1 pg DB, if it matters.
If you're updating enum definitions then you shouldn't be using enums. Use a lookup-table with a foreign key reference.
Enums are really for cases where you don't expect the enum to change, and where you're prepared to do significant work when they do change. In this case it would be entirely reasonable to expect to have to update all code that uses the enum if you change the enum's definition.
In legacy database tables we have numbered columns like C1, C2, C3, ..., C100 or M1, M2, M3, ..., M100.
These columns represent BLOB data.
It is not possible to change anything in this database.
Using JPA Embeddable, we map all of the columns to single fields, and then when embedding we override the names using 100 override annotations.
Recently we switched to Hibernate and I've found things like UserCollectionType and CompositeUserType, but I haven't found any use cases close to mine.
Is it possible to implement some user type by using Hibernate to be able to map a bundle of columns to a collection without additional querying?
Edit:
As you probably noticed, the column names can differ from table to table. I want to create one type, say "LegacyArray", without having to specify all of the @Column names each time I use this type.
But instead I'd use
@Type(type = "LegacyArrayUserType",
      parameters = {
          @Parameter(name = "prefix", value = "A"),
          @Parameter(name = "size", value = "128")
      })
List<Integer> legacyA;

@Type(type = "LegacyArrayUserType",
      parameters = {
          @Parameter(name = "prefix", value = "B"),
          @Parameter(name = "size", value = "64")
      })
List<Integer> legacyB;
I can think of a couple of ways that I would do this.
1. Create views for the collection information that simulates a normalized table structure, and map it to Hibernate as a collection:
Assuming your existing table is called primaryentity, I would create a view that's similar to the following:
-- untested SQL...
create view childentity as
(select primaryentity_id, c1 from primaryentity union
select primaryentity_id, c2 from primaryentity union
select primaryentity_id, c3 from primaryentity union
--...
select primaryentity_id, c100 from primaryentity)
Now from Hibernate's perspective, childentity is just a normalized table that has a foreign key to primaryentity. Mapping this should be pretty straightforward, and is covered here:
http://docs.jboss.org/hibernate/stable/core/reference/en/html/collections.html
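As a hedged sketch in annotation form (the link above shows the XML style), the view could be mapped as a read-only element collection; the names follow the view above, and List<Integer> is only an example element type:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.*;
import org.hibernate.annotations.Immutable;

@Entity
@Table(name = "primaryentity")
public class PrimaryEntity {
    @Id
    @Column(name = "primaryentity_id")
    private Long id;

    // Reads the simulated child rows from the view; @Immutable keeps the collection read-only.
    @ElementCollection
    @CollectionTable(name = "childentity",
                     joinColumns = @JoinColumn(name = "primaryentity_id"))
    @Column(name = "c1")
    @Immutable
    private List<Integer> values = new ArrayList<>();
}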
The benefits of this approach:
From Hibernate's point of view, the tables are normalized, it's a fairly simple mapping
No updates to your existing tables
The drawbacks:
Data is read-only, I don't think your view can be defined in an updatable manner (I could be wrong)
Requires change to the database, you may need to create lots of views
Alternately, if your DBA won't even let you add a view to the database, or if you need to perform updates:
2. Use Hibernate's dynamic model mapping facility to map your C1, C2, C3 properties to a Map, and have some code in your DAO layer do the appropriate conversion between the Map and the collection property:
I have never done this myself, but I believe Hibernate does allow you to map tables to HashMaps. I'm not sure how dynamic Hibernate allows this to be (i.e., can you get away with simply specifying the table name and have Hibernate automatically map all the columns?), but it's another way I can think of doing this.
If going with this approach though, be sure to use the data access object pattern, and ensure that the internal implementation (use of HashMaps) is hidden from the client code. Also be sure to check before writing to the database that the size of your collection does not exceed the number of available columns.
The benefits of this approach:
No change to the database at all
Data is updatable
O/R Mapping is relatively simple
The drawbacks:
Lots of plumbing in the DAO layer to map the appropriate types
Uses experimental Hibernate features that may change in the future
Personally, I think that design sounds like it breaks first normal form for relational databases. What happens if you need C101 or M101? Change your schema again? I think it's very intrusive.
If you add Hibernate to the mix it's even worse. Adding C101 or M101 means having to alter your Java objects, your Hibernate mappings, everything.
If you had 1:m relationships with C and M tables, you'd be able to handle the cases I just cited by adding additional rows. Your Java objects would contain a Collection<C> or Collection<M>, and your Hibernate mappings would be one-to-many mappings that don't change.
Maybe the reason you don't see any Hibernate examples matching your case is that it's a design that's not recommended.
If you must, maybe you should look at Hibernate Component Mapping.
UPDATE: The fact that this is legacy is duly noted. My point in bringing up first normal form is as much for others who might find this question in the future as it is for the person who posted the question. I would not want to answer the question in such a way that it silently asserted this design as "good".
Pointing out Hibernate component mapping is pertinent because knowing the name of what you're looking for can be the key when you're searching. Hibernate allows an object model to be finer grained than the relational model it maps. You are free to model a denormalized schema (e.g., Name and Address objects as part of a larger Person object). That's just the name they give such a technique. It might help find other examples as well.
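A small hedged sketch of component mapping in that sense (names purely illustrative):

import javax.persistence.*;

@Embeddable
class Address {
    private String street;
    private String city;
}

@Entity
public class Person {
    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // The Address component's fields are stored in the Person table itself, not a separate table.
    @Embedded
    private Address address;
}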
Sorry if I'm misunderstanding your problem here; I don't know much about Hibernate. But couldn't you just concatenate during selection from the database to get something like what you want?
Like:
SELECT whatever
, C1||C2||C3||C4||...||C100 AS CDATA
, M1||M2||M3||M4||...||M100 AS MDATA
FROM ...
WHERE ...
(Of course, the concatenation operator differs between RDBMSs.)
[EDIT] I suggest using a CompositeUserType. Here is an example. There is also a good example on page 228f of the book "Java Persistence with Hibernate".
That allows you to handle the many columns as a single object in Java.
The mapping looks like this:
@org.hibernate.annotations.Columns(columns = {
    @Column(name = "C1"),
    @Column(name = "C2"),
    @Column(name = "C3"),
    ...
})
private List<Integer> c;
Hibernate will load all columns at once during the normal query.
In your case, you must copy the int values from the list into a fixed number of columns in nullSafeSet. Pseudocode:
// st is the PreparedStatement passed to nullSafeSet; index is the position of the first column
for (int i = 0; i < numColumns; i++) {
    if (i < list.size()) {
        st.setInt(index + i, list.get(i));
    } else {
        st.setNull(index + i, Hibernate.INTEGER.sqlType());
    }
}
In nullSafeGet you must create a list and stop adding elements when a column is NULL. For additional safety, I suggest creating your own list implementation that doesn't allow growing beyond the number of columns (inherit from ArrayList and override ensureCapacity()).
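The reading side might then look roughly like this, in the same pseudocode spirit as above; rs and names are the ResultSet and column-name array that Hibernate passes to nullSafeGet:

List<Integer> list = new ArrayList<>();
for (String columnName : names) {
    int value = rs.getInt(columnName);
    if (rs.wasNull()) {
        break;    // stop adding elements at the first NULL column
    }
    list.add(value);
}
return list;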
[EDIT2] If you don't want to type all the @Column annotations, use a code generator for them. That can be as simple as a script to which you give a name and a number and which prints @Column(...) lines to System.out. After the script runs, just cut and paste the output into the source.
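For example, a throwaway generator along these lines (prefix and count are whatever your table needs):

public class ColumnAnnotationGenerator {
    public static void main(String[] args) {
        String prefix = args[0];                 // e.g. "C"
        int count = Integer.parseInt(args[1]);   // e.g. 100
        for (int i = 1; i <= count; i++) {
            System.out.println("@Column(name=\"" + prefix + i + "\"),");
        }
    }
}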
The only other solution would be to access the internal Hibernate API to build that information at runtime, but that API is internal, so a lot of it is private. You could use Java reflection and setAccessible(true), but that code probably won't survive the next Hibernate update.
You can use UserTypes to map a given number of columns to any type you wish. This could be a collection if, for example, your collections are always bounded in size by a known number of items.
It's been a while (> 3 years) since I used Hibernate so I'm pretty rusty but I recall it being very easy to do; your BespokeUserType class gets passed the ResultSet to hydrate your object from it.
I too have never used Hibernate.
I suggest writing a small program in an interpreted language (such as Python) in which you can execute a string as if it were a command. You could construct a statement which takes the tedious work out of doing what you want to do manually.