Let's say I have an entity with a very long name:
@Entity
public class SupercalifragilisticexpialidociousPanda
{
...
}
Using Hibernate to persist it to a Postgres DB works flawlessly. Oracle, however, doesn't allow for table/column/index names longer than 30 characters.
That should be easy to fix, since I can just specify the table name manually, like this:
@Entity
@Table(name="SuperPanda")
public class SupercalifragilisticexpialidociousPanda
{
...
}
Now everything is back to working perfectly... except that any references I have to the entity in other tables still use the long class name ("SupercalifragilisticexpialidociousPanda") instead of the short table name ("SuperPanda").
For instance, if the entity has an embedded @ElementCollection, like this:
@ElementCollection
private Set<String> nicknames;
Hibernate will try to create a table like this: create table SupercalifragilisticexpialidociousPanda_nicknames, which will naturally cause an ORA-00972: identifier is too long error.
The same thing also happens for @OneToOne associations, where the lookup column would be called something like supercalifragilisticexpialidociousPanda_uuid, which also fails with Oracle.
Now, one option would be to add a @CollectionTable(name="SuperPanda_nicknames") and @Column(name="...") annotation manually to every field that references this entity, but that's a lot of work and really error-prone.
Is there a way to just tell Hibernate once to use the short name everywhere a reference to the entity is required?
I also tried setting the entity name, like this:
@Entity(name="SuperPanda")
@Table(name="SuperPanda")
public class SupercalifragilisticexpialidociousPanda
{
...
}
... but it doesn't fix the issue.
What does one normally do in such a case?
Usually people name every database object (table, column, index) themselves. Letting Hibernate decide for you can lead to problems later when you decide to refactor something.
Every reference can be configured one way or another to use the names you decide on.
Ask a specific question if you can't figure out how to do that yourself.
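That said, if you really want a single switch for shortening names, Hibernate 5 lets you plug in a custom PhysicalNamingStrategy that is applied to every generated table and column name. Below is a minimal sketch; the blunt 30-character truncation is my own assumption about how you might shorten names, not something Hibernate provides out of the box:
import org.hibernate.boot.model.naming.Identifier;
import org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl;
import org.hibernate.engine.jdbc.env.spi.JdbcEnvironment;

// Hypothetical strategy that trims any generated identifier to Oracle's 30-character limit.
public class OracleSafeNamingStrategy extends PhysicalNamingStrategyStandardImpl {

    private static final int MAX_LENGTH = 30;

    @Override
    public Identifier toPhysicalTableName(Identifier name, JdbcEnvironment context) {
        return shorten(super.toPhysicalTableName(name, context));
    }

    @Override
    public Identifier toPhysicalColumnName(Identifier name, JdbcEnvironment context) {
        return shorten(super.toPhysicalColumnName(name, context));
    }

    private Identifier shorten(Identifier identifier) {
        if (identifier == null || identifier.getText().length() <= MAX_LENGTH) {
            return identifier;
        }
        // Naive truncation; a real implementation must also keep the shortened names unique.
        return Identifier.toIdentifier(identifier.getText().substring(0, MAX_LENGTH));
    }
}
The strategy would be registered via the hibernate.physical_naming_strategy property. Treat it as a starting point only: blind truncation can silently produce colliding names, which is exactly why many teams prefer naming everything explicitly.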
Imagine that I have a simple entity as follows:
@Entity
@Table(name = "PERSON")
public class Person {
    @Id
    @Column(name = "NAME")
    private String name;

    @Column(name = "GENDER")
    private String gender;
}
And two tables, the actual table holding the information and a lookup table.
TABLE PERSON (
NAME VARCHAR2 NOT NULL,
GENDER INT NOT NULL);
TABLE GENDER_LOOKUP (
GENDER_ID INT NOT NULL,
GENDER_NAME VARCHAR2 NOT NULL);
I want to save the information from my entity into the table, so that the String field gender is automatically converted to the corresponding gender int, using the lookup table as a reference. I thought of two approaches, but I was wondering if there was a more efficient way.
1. Create an enum and persist it by ordinal. I would rather avoid this because I'd like to have only one "source of truth" for the information, and for various business reasons it has to be a lookup table.
2. Use the @Converter annotation and write a custom converter. I think this would require me to query the table to pull out the relevant row, so I would have to make a JPA call to the database every time something was converted.
I'm currently planning to use option 2, but I was wondering if there was any way to do it within the database itself, since I assume using JPA for all of these operations has a higher cost than doing everything in the database. Essentially I would attempt to persist a String gender, and the database would look at the lookup table, translate it to the correct id, and save that.
I'm specifically using openJpa but hopefully this isn't implementation specific.
Since you seriously considered using enum, it means that GENDER_LOOKUP is static, i.e. the content doesn't change while the program is running.
Because of that, you should use option 2, but have the converter cache/load all the records from GENDER_LOOKUP on the first lookup. That way, you still only have one "source of truth", without the cost of hitting the database on every lookup.
If you need to add a new gender1, you'll just have to restart the app to refresh the cache.
1) These days, who knows what new genders will be needed.
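A rough sketch of such a caching converter is shown below, assuming a JPA 2.1 AttributeConverter is available in your provider. The LookupCache helper that reads GENDER_LOOKUP once (for example with a plain JDBC query) is a hypothetical name, not an existing API:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

// Converts the String gender on the entity to the int id stored in PERSON.GENDER,
// loading the GENDER_LOOKUP contents into memory on first use.
@Converter
public class GenderConverter implements AttributeConverter<String, Integer> {

    private static final Map<String, Integer> NAME_TO_ID = new ConcurrentHashMap<>();
    private static final Map<Integer, String> ID_TO_NAME = new ConcurrentHashMap<>();

    @Override
    public Integer convertToDatabaseColumn(String genderName) {
        loadCacheIfNeeded();
        return NAME_TO_ID.get(genderName);
    }

    @Override
    public String convertToEntityAttribute(Integer genderId) {
        loadCacheIfNeeded();
        return ID_TO_NAME.get(genderId);
    }

    private void loadCacheIfNeeded() {
        if (!NAME_TO_ID.isEmpty()) {
            return;
        }
        // Hypothetical helper: returns GENDER_ID -> GENDER_NAME pairs read from GENDER_LOOKUP.
        LookupCache.readGenderLookup().forEach((id, name) -> {
            ID_TO_NAME.put(id, name);
            NAME_TO_ID.put(name, id);
        });
    }
}
The gender field would then be annotated with @Convert(converter = GenderConverter.class) instead of being mapped as a plain column.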
I have a generic class that contains a runQuery method with the following setup:
public Object runQuery(String query) {
Query retVal = getSession().createSQLQuery(query);
return retVal.list();
}
I am trying to figure out how to transpose the returned values into a list of Check objects (List&lt;Check&gt;):
public class Check {
private int id;
private String name;
private String confirmationId;
//getters and setters
}
Most queries that I run are actually stored procs in MySQL. I know of native queries and ResultTransformers (which, if implemented, would mean I'd have to change my generic setup, and I'd rather not do that).
Any ideas how I can accomplish this with the current setup?
You can find tutorials on ORMs (What is Object/relational mapping (ORM) in relation to Hibernate and JDBC?).
Basically, you add annotations to your Check class to tell Hibernate which Java field matches which DB column, write a JPQL query (it looks like SQL), and Hibernate fetches your objects and maps them from the DB to POJOs.
It's a broad subject; this is a good start: https://www.tutorialspoint.com/hibernate/hibernate_quick_guide.htm
It will require some configuration, but it's worth it. Here's one tutorial on annotation-based configuration: https://www.tutorialspoint.com/hibernate/hibernate_annotations.htm but with fewer explanations of how ORM works (there's also EclipseLink as an alternative ORM).
Otherwise, you could write your own mapper that takes values from a ResultSet and sets them on your class. For a lot of reasons, I would recommend using an ORM rather than this method (except maybe if you have only one class stored in the DB, which I doubt).
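If changing the generic method turns out to be acceptable after all, here is a minimal sketch of the ResultTransformer route the question already mentions; the column aliases are an assumption about what the stored proc returns and must match the Check property names:
import java.util.List;
import org.hibernate.SQLQuery;
import org.hibernate.transform.Transformers;

// Runs a native query (or stored proc call) and maps each row to a Check
// by matching column aliases to the bean's property names via its setters.
@SuppressWarnings("unchecked")
public List<Check> runCheckQuery(String sql) {
    SQLQuery query = getSession().createSQLQuery(sql);
    query.setResultTransformer(Transformers.aliasToBean(Check.class));
    return query.list();
}
If the JDBC types don't line up with the bean's field types (for example an id coming back as BigInteger), addScalar(...) calls on the SQLQuery can be used to coerce them.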
I would like to avoid having column names as strings in the code. Is there any other way to accomplish this?:
String query = "SELECT c.foo1.columnA, c.foo1.foo2.columnB FROM Table c";
session.createQuery(query).list();
I'm able to iterate over a column string like c.foo1.foo2.columnB by splitting it and walking the ClassMetadata, the property Type, and other Hibernate functions until I reach the last element. However, I can't think of a way to build such a column string from Java beans by iterating through their properties.
Not sure what the intention is. A couple of thoughts:
If you are worried about property names being wrong, current-day IDEs do a good job of validating property names in JPA queries.
Reflection can give you the property names, but not all properties are necessarily mapped to columns. You can look at the mapping metadata and use it along with the bean property names obtained via reflection.
Hope that helps.
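As a rough illustration of the reflection idea (a hypothetical helper, not tied to any particular Hibernate version), the bean property names can be listed with java.beans.Introspector:
import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.ArrayList;
import java.util.List;

// Lists the bean property names of an entity class so queries can be
// assembled without hard-coding the names as string literals.
public final class PropertyNames {

    private PropertyNames() {
    }

    public static List<String> of(Class<?> beanClass) throws IntrospectionException {
        BeanInfo info = Introspector.getBeanInfo(beanClass, Object.class);
        List<String> names = new ArrayList<>();
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            names.add(pd.getName());
        }
        return names;
    }
}
As noted above, not every property returned this way corresponds to a mapped column, so the list still has to be checked against the entity's mapping metadata.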
There is no way to achieve what you are looking for. But if your concern is the correctness of these queries, and you worry that a problem will not be discovered until execution reaches the query, you could use a NamedQuery:
@Entity
@NamedQuery(
    name = "findAllEmployeesByFirstName",
    query = "SELECT OBJECT(emp) FROM Employee emp WHERE emp.firstName = 'John'"
)
public class Employee implements Serializable {
    ...
}
Usage
List employees = em.createNamedQuery("findAllEmployeesByFirstName").getResultList();
The benefit is that queries defined in NamedQuery annotations are compiled to actual SQL at startup time, so incorrect field references (typos, etc.) will cause a startup error and the application will not start.
Another option, as mentioned in the other answer, is to trust a good IDE to refactor all occurrences properly when you rename fields (IntelliJ IDEA does a great job at this, as would any other IDE).
EDIT: I do not think there is any performance degradation with named queries. Rather, they may appear to be faster as compiled queries are cached (very subjective).
Finally, it's better to use the actual query as-is, as mentioned in the comments. It is far more readable and easier to debug in context. If you are concerned about correctness, unit-test the heck out of it and be confident.
I'm on a project that uses the latest Spring+Hibernate for persistence and for implementing a REST API.
The different tables in the database contain lots of records which are in turn pretty big as well. So, I've created a lot of DAOs to retrieve different levels of detail and their accompanying DTOs.
For example, say I have some Employee table in the database that contains tons of information about each employee, and I know that any client using my application would benefit greatly from retrieving different levels of detail of an Employee entity (instead of being bombarded by the entire entity every time). What I've been doing so far is something like this:
class EmployeeL1DetailsDto
{
String id;
String firstName;
String lastName;
}
class EmployeeL2DetailsDto extends EmployeeL1DetailsDto
{
Position position;
Department department;
PhoneNumber workPhoneNumber;
Address workAddress;
}
class EmployeeL3DetailsDto extends EmployeeL2DetailsDto
{
int yearsOfService;
PhoneNumber homePhoneNumber;
Address homeAddress;
BigDecimal salary;
}
And so on...
Here you see that I've divided the Employee information into different levels of detail.
The accompanying DAO would look something like this:
class EmployeeDao
{
...
public List<EmployeeL1DetailsDto> getEmployeeL1Detail()
{
...
// uses a criteria-select query to retrieve only L1 columns
return list;
}
public List<EmployeeL2DetailsDto> getEmployeeL2Detail()
{
...
// uses a criteria-select query to retrieve only L1+L2 columns
return list;
}
public List<EmployeeL3DetailsDto> getEmployeeL3Detail()
{
...
// uses a criteria-select query to retrieve only L1+L2+L3 columns
return list;
}
.
.
.
// And so on
}
I've been using Hibernate's aliasToBean() to auto-map the retrieved entities into the DTOs. Still, I feel the amount of boilerplate in the process as a whole (all the DTOs, DAO methods, URL parameters for the level of detail wanted, etc.) is a bit worrying and makes me think there might be a cleaner approach to this.
So, my question is: Is there a better pattern to follow to retrieve different levels of detail from a persisted entity?
I'm pretty new to Spring and Hibernate, so feel free to point anything that is considered basic knowledge that you think I'm not aware of.
Thanks!
I would go with as few different queries as possible. I would rather make associations lazy in my mappings and then let them be initialized on demand with appropriate Hibernate fetch strategies.
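As a small sketch of that idea (the Employee entity and its Department association here are illustrative, not your actual mapping), the association stays unloaded until a query or access path actually needs it:
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

@Entity
public class Employee {
    @Id
    public String id;
    public String firstName;
    public String lastName;

    // Not loaded with the basic query; fetched only when accessed
    // or when a query fetch-joins it explicitly.
    @ManyToOne(fetch = FetchType.LAZY)
    public Department department;
}
A detail query can then opt in with a fetch join, e.g. select e from Employee e join fetch e.department, while a list query leaves the association untouched.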
I think that there is nothing wrong in having multiple different DTO classes per one business model entity, and that they often make the code more readable and maintainable.
However, if the number of DTO classes tends to explode, then I would make a balance between readability (maintainability) and performance.
For example, if a DTO field is not used in a context, I would leave it as null, or fill it in anyway if that is really not expensive. Then, if it is null, you could instruct your object marshaller to exclude null fields when producing the REST service response (JSON, XML, etc.) if that really bothers the service consumer. Or, if you are filling it in, it's always welcome later when you add new features to the application and the field starts being used in some context.
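For instance, with Jackson as the JSON marshaller (an assumption; your stack may use a different one), excluding null fields is a one-annotation change on the DTO:
import com.fasterxml.jackson.annotation.JsonInclude;

// Fields left null (e.g. detail levels not loaded for this request)
// are simply omitted from the serialized JSON.
@JsonInclude(JsonInclude.Include.NON_NULL)
public class EmployeeDto {
    public String id;
    public String firstName;
    public String lastName;
    public Address homeAddress; // left out of the JSON whenever it is null
}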
You will have to define the different granularity versions one way or another. You can try to have sub-objects that are not loaded/set to null (as recommended in other answers), but it can easily get quite awkward, since you will start to structure your data by security concerns and not by the domain model.
So doing it with individual classes is after all not such a bad approach.
You might want it to be more dynamic (maybe because you even want to extend your data model on the DB side with more data).
If that's the case, you might want to move the definition out of the code into some configuration (it could even be dynamic at runtime). This will of course require a dynamic data model on the Java side as well, like using a hashmap (see here on how to do that). You thereby gain a dynamic data model, but lose type safety (at least to a certain extent). In other languages that would probably feel natural, but in Java it's less common.
It would then be up to your HQL to define how you want to populate your objects.
The path you want to take now depends a lot on the context and how your objects will be used.
Another approach is to use only domain objects at the DAO level and define the needed subsets of information as DTOs for each usage. Then convert the Employee entity to each of the DTOs using a generic DTO converter, as I have done lately in my professional Spring work. An MIT-licensed module is available in the Maven repository as the artifact dtoconverter, with further info and user guidance on the author's wiki:
http://ratamaa.fi/trac/dtoconverter
The quickest way to get an idea is from the example page there.
Happy hunting...
Blaze-Persistence Entity Views have been created for exactly such a use case. You define the DTO structure as interface or abstract class and have mappings to your entity's attributes. When querying, you just pass in the class and the library will take care of generating an optimized query for the projection.
Here is a quick example.
@EntityView(Cat.class)
public interface CatView {
    @IdMapping("id")
    Integer getId();
    String getName();
}
CatView is the DTO definition, and here comes the querying part:
CriteriaBuilder<Cat> cb = criteriaBuilderFactory.create(entityManager, Cat.class);
cb.from(Cat.class, "theCat")
.where("father").isNotNull()
.where("mother").isNotNull();
EntityViewSetting<CatView, CriteriaBuilder<CatView>> setting = EntityViewSetting.create(CatView.class);
List<CatView> list = entityViewManager
.applySetting(setting, cb)
.getResultList();
Note that the essential part is that the EntityViewSetting has the CatView type which is applied onto an existing query. The generated JPQL/HQL is optimized for the CatView, i.e. it only selects (and joins!) what it really needs.
SELECT
theCat.id,
theCat.name
FROM
Cat theCat
WHERE theCat.father IS NOT NULL
AND theCat.mother IS NOT NULL
I have a model class that references another model class and seem to be encountering an issue where the #OneToOne annotation fixes one problem but causes another. Removing it causes the inverse.
JPA throws "multiple assignments to same column" when trying to save changes to model. The generated SQL has duplicate columns and I'm not sure why.
Here's a preview of what the classes look like:
The parent class references look like this:
public class Appliance {
public Integer locationId;
@Valid
@OneToOne
public Location location;
}
The child Location class has an id field and a few other text fields -- very simple:
public class Location {
public Integer id;
public String name;
}
When I attempt to perform a save operation, does anyone know why JPA is creating an insert statement for the Appliance table that contains two fields named "location_id"?
I need to annotate the reference to the child class with @OneToOne if I want to be able to retrieve data from the corresponding database table to display on screen. However, if I remove @OneToOne, the save works fine, but it obviously won't load the Location data into the child object when I query the DB.
Thanks in advance!
It appears you did not define an inheritance strategy (@Inheritance) on the parent class. Since you did not, the default is to combine the parent and the child class into the same table using the single-table strategy.
Since both entities are going into the same table, I think @OneToOne is trying to write the id twice, regardless of which side it is on.
If you want the parent to be persisted in its own table, look at InheritanceType.JOINED.
Or consider refactoring so that you are not persisting the parent separately, as JOINED is not considered a safe option with some JPA providers.
See official Oracle Documentation below.
http://docs.oracle.com/javaee/7/tutorial/doc/persistence-intro002.htm#BNBQR
37.2.4.1 The Single Table per Class Hierarchy Strategy
With this strategy, which corresponds to the default InheritanceType.SINGLE_TABLE, all classes in the hierarchy are mapped to a single table in the database. This table has a discriminator column containing a value that identifies the subclass to which the instance represented by the row belongs.
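For reference, here is a minimal sketch of what the joined strategy looks like in code; the Vehicle/Truck classes are purely illustrative, not the question's actual model:
// Vehicle.java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Inheritance;
import javax.persistence.InheritanceType;

// The parent gets its own table; each subclass table is joined to it by primary key.
@Entity
@Inheritance(strategy = InheritanceType.JOINED)
public class Vehicle {
    @Id
    public Integer id;
}

// Truck.java
import javax.persistence.Entity;

@Entity
public class Truck extends Vehicle {
    public Integer payloadKg;
}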
In OpenJPA, according to the docs (http://openjpa.apache.org/builds/1.0.1/apache-openjpa-1.0.1/docs/manual/jpa_overview_mapping_field.html), section 8.4, the foreign key column in a one-to-one mapping:
Defaults to the relation field name, plus an underscore, plus the name
of the referenced primary key column.
And the JPA API seems to concur with this (http://docs.oracle.com/javaee/6/api/javax/persistence/JoinColumn.html)
I believe this means that in a one-to-one mapping, the default column name for properties in a dependent class is parentClassFieldName_dependentClassFieldName (or location_id in your case). If that's the case, the location_id column you are defining in your Appliance class is conflicting with the location_id default column name which would be generated for your Location class.
You should be able to correct this by using the @Column(name="someColumnName") annotation and the @JoinColumn annotation on your @OneToOne relationship to force the column name to be something unique.
Ok gang, I figured it out.
Here's what the new code looks like, followed by a brief explanation...
Parent Class:
public class Appliance {
public Integer locationId;
@Valid
@OneToOne(cascade = CascadeType.ALL)
@JoinColumn(name="location_id", referencedColumnName="id")
public Location location;
}
Child Class:
public class Location {
public Integer id;
public String name;
}
The first part of the puzzle was the explicit addition of "cascade = CascadeType.ALL" in the parent class. This resolved the initial "multiple assignments to same column" error by allowing the child object to be persisted.
However, I encountered an issue during update operations which is due to some sort of conflict between EBean and JPA whereby it triggers a save() operation on nested child objects rather than a cascading update() operation. I got around this by issuing an explicit update on the child object and then setting it to null before the parent update operation occurred. It's sort of a hack, but it seems like all these persistence frameworks solve one set of problems but cause others -- I guess that's why I've been old school and always rolled my own persistence code until now.