How to handle Target Unreachable in an elegant way - java

I have two questions regarding the all-too-common "Target Unreachable" exception.
What's the best practice to handle it? For example, you have:
Country has City, City has Street.
- Do you put new City() in Country's constructor and new Street() in City's constructor
(so that you have them in one centralized place, but always create objects you might not need),
OR do you initialize the objects in the various places in your code where you need them (spread all over your code)?
- And if the user doesn't type anything, say for Street: in order to prevent the insertion of a blank row in the DB,
you set the street back to null. Where's the best place to set it back to null?
(Say you have Cascade.ALL or an extended persistence context; otherwise you'd just not save it if you knew it was empty.)
PS: Why doesn't JSF just instantiate what it needs, and why doesn't Hibernate skip persisting entities whose persistent fields are all empty?
Is it for performance, or why? And again: is it bad to have empty rows in the DB, with just a PK and FKs?

I think it depends on the relationship between the entities in your app. In some cases I do create a related instance of another object in the constructor, but only where you would never have one entity without the other.
One alternative is to create the objects lazily in the getter:
public class Country {
    private City city;

    public City getCity() {
        if (this.city == null) {
            this.city = new City();
        }
        return this.city;
    }
}
As for the PS questions: JSF doesn't instantiate objects for you, and I'm not sure that would be desirable, but if you use the lazy-getter approach you effectively get the same thing. Hibernate persists an entity once it has been instantiated because it persists the current state of the persistable object model; if it skipped such entities, it wouldn't be behaving as expected.
I typically don't worry about a few null rows as I choose to use Hibernate knowing that an ORM comes with some small cost in performance. To me it is still well worth it to enjoy the abstraction of persistence.
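On the second question, where to null out an empty Street to avoid a blank row: one natural place is just before persisting. A minimal sketch, assuming a save method in a backing bean or service (the countryDao and the isBlank helper are hypothetical, not from the question):

public void saveCountry(Country country) {
    City city = country.getCity();
    // If the lazy getter instantiated a Street but the user typed nothing,
    // null it out so cascading persistence doesn't insert a blank row.
    if (city != null && city.getStreet() != null && isBlank(city.getStreet().getName())) {
        city.setStreet(null);
    }
    countryDao.save(country); // hypothetical DAO
}

private boolean isBlank(String s) {
    return s == null || s.trim().isEmpty();
}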

Related

Hibernate associations using too much memory

I have a table "class" which is linked to the tables "student" and "teacher".
A "class" is linked to multiple students and teachers via a foreign-key relationship.
When I use Hibernate associations and fetch a large number of entities (tried with 5000), I see that it takes 4 times more memory than if I just use foreign-key placeholders.
Is there something wrong with Hibernate associations?
Can I use a memory profiler to figure out what's using too much memory?
This is how the schema is:
class(id,className)
student(id,studentName,class_id)
teacher(id,teacherName,class_id)
class_id is the foreign key.
Case #1 - Hibernate Associations
1) In the Class entity, students and teachers are mapped as:
@Entity
@Table(name = "class")
public class Class {
    private Integer id;
    private String className;
    private Set<Student> students = new HashSet<Student>();
    private Set<Teacher> teachers = new HashSet<Teacher>();

    @OneToMany(fetch = FetchType.EAGER, mappedBy = "classRef")
    @Cascade({ CascadeType.ALL })
    @Fetch(FetchMode.SELECT)
    @BatchSize(size = 500)
    public Set<Student> getStudents() {
        return students;
    }
2) In Student and Teacher, the class is mapped as:
@Entity
@Table(name = "student")
public class Student {
    private Integer id;
    private String studentName;
    private Class classRef;

    @ManyToOne
    @JoinColumn(name = "class_id")
    public Class getClassRef() {
        return classRef;
    }
Query used:
sessionFactory.openSession().createQuery("from Class where id<5000");
This, however, was taking a huge amount of memory.
Case #2 - Remove associations and fetch separately
1) No mapping in the Class entity
@Entity
@Table(name = "class")
public class Class {
    private Integer id;
    private String className;
2) Only a placeholder for the foreign key in Student and Teacher
@Entity
@Table(name = "student")
public class Student {
    private Integer id;
    private String studentName;
    private Integer class_id;
Queries used:
sessionFactory.openSession().createQuery("from Class where id<5000");
sessionFactory.openSession().createQuery("from Student where class_id = :classId");
sessionFactory.openSession().createQuery("from Teacher where class_id = :classId");
Note - only the important parts of the code are shown. I am measuring the memory usage of the fetched entities via the JAMM library.
I also tried marking the query as read-only in case #1, as below; this does not improve memory usage much, only very slightly, so that's not the solution.
Query query = sessionFactory.openSession()
        .createQuery("from Class where id<5000");
query.setReadOnly(true);
List<Class> classList = query.list();
sessionFactory.getCurrentSession().close();
Below are the heap dump snapshots, sorted by size. It looks like the entity state maintained by Hibernate is causing the problem.
[Screenshot: heap dump for the Hibernate-associations program]
[Screenshot: heap dump for fetching with separate entities]
You are doing an EAGER fetch with the annotation below. This fetches all the students even if you never access getStudents(). Make it lazy and it will fetch only when needed.
From
@OneToMany(fetch = FetchType.EAGER, mappedBy = "classRef")
To
@OneToMany(fetch = FetchType.LAZY, mappedBy = "classRef")
When Hibernate loads a Class entity containing OneToMany relationships, it replaces the collections with its own custom version of them. In the case of a Set, it uses a PersistentSet. As can be seen on grepcode, this PersistentSet object contains quite a bit of stuff, much of it inherited from AbstractPersistentCollection, to help Hibernate manage and track things, particularly dirty checking.
Among other things, the PersistentSet contains a reference to the session, a boolean to track whether it's initialized, a list of queued operations, a reference to the Class object that owns it, a string describing its role (not sure what exactly that's for, just going by the variable name here), the string uuid of the session factory, and more. The biggest memory hog among the lot is probably the snapshot of the unmodified state of the set, which I would expect to approximately double memory consumption by itself.
There's nothing wrong here, Hibernate is just doing more than you realized, and in more complex ways. It shouldn't be a problem unless you are severely short on memory.
Note, incidentally, that when you save a new Class object that Hibernate previously was unaware of, Hibernate will replace the simple HashSet objects you created with new PersistentSet objects, storing the original HashSet wrapped inside the PersistentSet in its set field. All Set operations will be forwarded to the wrapped HashSet, while also triggering PersistentSet dirty tracking and queuing logic, etc. With that in mind, you should not keep and use any external references to the Set from before saving, and should instead fetch a new reference to Hibernate's PersistentSet instance and use that if you need to make any changes (to the set, not to the students or teachers within it) after the initial save.
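A small sketch of that pitfall, using the question's entities (the setter and the session variable are assumed):

Set<Student> original = new HashSet<Student>();
Class clazz = new Class();
clazz.setStudents(original); // plain HashSet before the save
session.save(clazz);

// After the save, Hibernate wraps the HashSet in a PersistentSet stored in
// the entity's field. Mutating the old reference bypasses the PersistentSet's
// dirty-tracking and queuing logic:
original.add(aStudent);              // don't do this

// Re-read the collection through the entity instead, so you work with the
// managed PersistentSet:
clazz.getStudents().add(aStudent);   // dirty tracking sees this change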
Regarding the huge memory consumption you are noticing, one potential reason is that a Hibernate Session has to maintain the state of each entity it has loaded in the form of an EntityEntry object, i.e. one extra object per loaded entity. This is needed for Hibernate's automatic dirty-checking mechanism during the flush stage, which compares the current state of an entity with its original state (the one stored as the EntityEntry).
Note that this EntityEntry is different from the object we access in application code when we call session.load/get/createQuery/createCriteria. It is internal to Hibernate and stored in the first-level cache.
Quoting from the Javadoc for EntityEntry:
We need an entry to tell us all about the current state of an object
with respect to its persistent state. Implementation Warning: Hibernate
needs to instantiate a high amount of instances of this class,
therefore we need to take care of its impact on memory consumption.
One option: assuming the intent is only to read and iterate through the data, and not to change those entities, you can consider using a StatelessSession instead of a Session.
The advantage, as quoted from the Javadoc for StatelessSession:
A stateless session does not implement a first-level cache nor
interact with any second-level cache, nor does it implement
transactional write-behind or automatic dirty checking
With no automatic dirty checking, there is no need for Hibernate to create an EntityEntry for each loaded entity as it does with a regular Session. This should reduce the pressure on memory.
That said, it does have its own set of limitations, as mentioned in the StatelessSession Javadoc.
One limitation worth highlighting is that it does not lazy-load collections. If you use a StatelessSession and want to load the associated collections, you should either join-fetch them using HQL or EAGER-fetch them using Criteria.
Another is that it does not interact with any second-level cache.
So, given that it carries no first-level-cache overhead, you may want to try a StatelessSession and see whether it fits your requirements and reduces memory consumption as well.
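A minimal sketch of that approach, following this answer's suggestion to join-fetch the collection up front since it won't be lazy-loaded later (the query shape is mine; entity names follow the question):

StatelessSession session = sessionFactory.openStatelessSession();
try {
    // Join-fetch students because a StatelessSession cannot lazy-load them later.
    List<Class> classes = session
            .createQuery("select distinct c from Class c"
                    + " left join fetch c.students where c.id < :maxId")
            .setParameter("maxId", 5000)
            .list();
    // No first-level cache, no EntityEntry per entity, no dirty checking:
    // read and iterate, but don't expect changes to be flushed.
    for (Class c : classes) {
        // read-only processing here
    }
} finally {
    session.close();
}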
Yes, you can use a memory profiler, like VisualVM or YourKit, to see what takes so much memory. One way is to take a heap dump and then load it in one of these tools.
However, you also need to make sure that you compare apples to apples. Your queries in case #2,
sessionFactory.openSession().createQuery("from Student where class_id = :classId");
sessionFactory.openSession().createQuery("from Teacher where class_id = :classId");
select students and teachers for only one class, while in case #1 you select far more. You need to use <= :classId instead.
In addition, it is a little strange that you have one student and one teacher record per class. A teacher can teach more than one class, and a student can be in more than one class. I do not know the exact problem you're solving, but if a student can indeed participate in many classes and a teacher can teach more than one class, you will probably need to design your tables differently.
Try @Fetch(FetchMode.JOIN); this generates a single query instead of multiple select queries. Also review the generated queries. I prefer using Criteria over HQL (just a thought).
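Applied to the mapping from the question, that would look something like this (a sketch; note that FetchMode.JOIN replaces the FetchMode.SELECT and @BatchSize combination shown earlier):

@OneToMany(fetch = FetchType.EAGER, mappedBy = "classRef")
@Cascade({ CascadeType.ALL })
@Fetch(FetchMode.JOIN) // one outer-join query instead of many selects
public Set<Student> getStudents() {
    return students;
}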
For profiling, use free tools like VisualVM or JConsole. YourKit is good for advanced profiling, but it is not free; I believe there is a trial version.
You can take a heap dump of your application and analyze it with any memory-analyzer tool to check for memory leaks.
BTW, I am not exactly sure about the memory usage in the current scenario.
It's likely that the reason is the bidirectional link from Student to Class and from Class to Students. When you fetch Class A (id 4900), the Class object must be hydrated; in turn this pulls in all the Student objects (and teachers, presumably) associated with this class. When that happens, each Student object must be hydrated, which causes the fetch of every class the Student is a part of. So although you only wanted Class A, you end up with:
Fetch Class A (id 4900)
Returns Class A with reference to 3 students, Student A, B, C.
Student A has ref to Class A, B (id 5500)
Class B needs hydrating
Class B has reference to Students C,D
Student C needs hydrating
Student C only has reference to Class A and B
Student C hydration complete.
Student D needs hydrating
Student D only has reference to Class B
Student D hydration complete
Class B hydration complete
Student B needs hydrating (from original class load class A)
etc... With eager fetching, this continues until all links are hydrated. The point is that you can end up with classes in memory that you didn't actually want, or whose id is not less than 5000.
This can get worse fast.
Also, make sure you override the hashCode and equals methods. Otherwise you may get redundant objects, both in memory and in your sets.
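A minimal sketch of an identifier-based equals/hashCode pair for the Student entity (assuming the id is assigned before the entity goes into a Set):

@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof Student)) return false;
    Student other = (Student) o;
    // Identifier-based equality; relies on id being set before use in a Set.
    return id != null && id.equals(other.getId());
}

@Override
public int hashCode() {
    return id != null ? id.hashCode() : 0;
}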
One way to improve this is either to change to LAZY loading, as others have mentioned, or to break the bidirectional links. If you know you will only ever access students per class, then drop the link from Student back to Class. For the student/class example a bidirectional link makes sense, but maybe it can be avoided.
As you say, "I want 'all' the collections", so lazy loading won't help.
Do you need every field of every entity? If not, use a projection to get just the bits you want (a sketch follows below). See "when to use Hibernate Projections".
Alternatively, consider minimalist Teacher-Lite and Student-Lite entities that the full-fat versions extend.
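A sketch of such a projection with the Criteria API (property names follow the question's mapping; Projections and Restrictions come from org.hibernate.criterion):

// Retrieve only two columns instead of hydrating full Student entities.
List<Object[]> rows = session.createCriteria(Student.class)
        .setProjection(Projections.projectionList()
                .add(Projections.property("id"))
                .add(Projections.property("studentName")))
        .add(Restrictions.lt("classRef.id", 5000))
        .list();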

How to design models to access data hierarchy from up or bottom?

I know the question sounds weird, but an example will clarify my thoughts.
Suppose I have two classes, Employee and Department. Department has an ArrayList of employees, so I can access the employees in a specific department.
What do I have to do if I also need to access the employee's department from the Employee object? I think adding the department to the Employee object is bad practice, isn't it?
This is the problem of navigability, and it is a big concern in database design. There are a couple of ways to achieve this:
Brute force. If you have an Employee, scan the Departments to find which (if any) it belongs to. Obviously this is quite inefficient, and it assumes you have a list of all the Departments available.
Indexing. Alongside your Department and Employee objects, maintain a map from Employees to Departments, and use this map for the reverse look-ups (sketched after this list). Of course this has some overhead, but the big worries are transactionality (an update may change the index, then fail before updating the Department) and consistency (what happens if a client reads the data after you've updated the Department but before you've updated the index?).
Double-linking. Require that the Department and Employee classes each hold a reference to the other. This is tricky because constructing the references is fiddly, and it has the same transactionality and consistency problems. It also makes it extremely hard to make the references final (although I don't think you want that in this case anyway).
So, there are issues to consider before doing this, but it's not insurmountable - just make sure you've thought about thread safety and fault tolerance before you do it. Doubly-linked lists are a pretty common data structure that uses this pattern.
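A minimal sketch of the indexing approach (the class and method names are mine, and Employee is assumed to have sensible equals/hashCode):

import java.util.HashMap;
import java.util.Map;

public class DepartmentDirectory {
    // Reverse index: employee -> owning department.
    private final Map<Employee, Department> departmentByEmployee = new HashMap<>();

    public void addEmployee(Department dept, Employee emp) {
        // The forward link and the reverse index must be updated together;
        // this pair of writes is the transactionality/consistency concern
        // described above.
        dept.getEmployees().add(emp);
        departmentByEmployee.put(emp, dept);
    }

    public Department departmentOf(Employee emp) {
        return departmentByEmployee.get(emp); // O(1), no back-reference needed
    }
}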
You can initialize it like the following. This is considered bad practice in multi-threaded applications, but it is sometimes acceptable for specific use cases in single-threaded ones. One important thing to consider is that if you pass the Department object as this to the Employee object, the Department is technically not fully constructed yet. Another thing to consider is how garbage collection behaves with such mutually referencing objects.
public class Department {
    private final List<Employee> myEmployeeList = new ArrayList<>();

    public Department() {
        int size = 5;
        for (int i = 0; i < size; i++) {
            // 'this' escapes the constructor here -- see the caveat above
            myEmployeeList.add(new Employee(this));
        }
    }
}

public class Employee {
    private final Department myDepartment;

    public Employee(Department department) {
        myDepartment = department;
    }
}
You could also have a class that holds references to both the Department and the Employee and give the Employee that reference, or initialize the Department in a method called from Employee.

DTOs with different granularity

I'm on a project that uses the latest Spring+Hibernate for persistence and for implementing a REST API.
The tables in the database contain lots of records, which are in turn pretty big as well, so I've created a lot of DAOs to retrieve different levels of detail, along with their accompanying DTOs.
For example, say I have an Employee table that contains tons of information about each employee, and I know that any client of my application would benefit greatly from retrieving different levels of detail of an Employee entity (instead of being bombarded with the entire entity every time). What I've been doing so far is something like this:
class EmployeeL1DetailsDto
{
    String id;
    String firstName;
    String lastName;
}

class EmployeeL2DetailsDto extends EmployeeL1DetailsDto
{
    Position position;
    Department department;
    PhoneNumber workPhoneNumber;
    Address workAddress;
}

class EmployeeL3DetailsDto extends EmployeeL2DetailsDto
{
    int yearsOfService;
    PhoneNumber homePhoneNumber;
    Address homeAddress;
    BigDecimal salary;
}
And so on...
Here you see that I've divided the Employee information into different levels of detail.
The accompanying DAO would look something like this:
class EmployeeDao
{
    ...

    public List<EmployeeL1DetailsDto> getEmployeeL1Detail()
    {
        ...
        // uses a criteria-select query to retrieve only L1 columns
        return list;
    }

    public List<EmployeeL2DetailsDto> getEmployeeL2Detail()
    {
        ...
        // uses a criteria-select query to retrieve only L1+L2 columns
        return list;
    }

    public List<EmployeeL3DetailsDto> getEmployeeL3Detail()
    {
        ...
        // and so on
    }
}
I've been using Hibernate's aliasToBean() to auto-map the retrieved entities onto the DTOs. Still, the amount of boilerplate in the process as a whole (all the DTOs, DAO methods, URL parameters for the level of detail wanted, etc.) is a bit worrying and makes me think there might be a cleaner approach.
So, my question is: Is there a better pattern to follow to retrieve different levels of detail from a persisted entity?
I'm pretty new to Spring and Hibernate, so feel free to point anything that is considered basic knowledge that you think I'm not aware of.
Thanks!
I would go with as few different queries as possible. I would rather make the associations lazy in my mappings and then let them be initialized on demand with appropriate Hibernate fetch strategies.
I think there is nothing wrong with having multiple DTO classes per business-model entity, and they often make the code more readable and maintainable.
However, if the number of DTO classes tends to explode, I would strike a balance between readability (maintainability) and performance.
For example, if a DTO field is not used in a context, I would either leave it null or fill it in anyway if that is not really expensive. If it is null, you can instruct your object marshaller to exclude null fields when producing the REST service response (JSON, XML, etc.) if that bothers the service consumer. If you fill it in, it will be welcome later when you add new features and the field starts being used in some context.
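For reference, the aliasToBean() mapping mentioned in the question looks roughly like this for the L1 level (property names follow the DTOs above; the DTO needs matching setters, and Transformers comes from org.hibernate.transform):

// Projects only the L1 columns and maps them onto the DTO by alias.
List<EmployeeL1DetailsDto> list = session.createCriteria(Employee.class)
        .setProjection(Projections.projectionList()
                .add(Projections.property("id"), "id")
                .add(Projections.property("firstName"), "firstName")
                .add(Projections.property("lastName"), "lastName"))
        .setResultTransformer(Transformers.aliasToBean(EmployeeL1DetailsDto.class))
        .list();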
You will have to define the different granularity versions one way or another. You can try having sub-objects that are not loaded (set to null), as recommended in other answers, but that can easily get awkward, since you start structuring your data by security concerns rather than by the domain model.
So doing it with individual classes is, after all, not such a bad approach.
You might want to make it more dynamic (maybe because you want to extend the data model on the DB side with more data). In that case you might want to move the definitions out of the code and into configuration (which could even be dynamic at runtime). This will of course require a dynamic data model on the Java side as well, for example a hash map (see here on how to do that). You thereby gain a dynamic data model but lose type safety (at least to a certain extent); in other languages that would probably feel natural, but in Java it is less common.
It would then be up to your HQL to define how you want to populate your objects.
The path to take depends a lot on the context and on how your objects will be used.
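A tiny sketch of what such a map-backed model could look like on the Java side (purely illustrative; the configuration lookup and the row source are hypothetical):

// A dynamic "record": which fields exist is configuration, not a class.
Map<String, Object> employee = new HashMap<>();
for (String field : configuredFieldsFor("employee", "L2")) { // hypothetical config
    employee.put(field, resultRow.get(field));                // hypothetical query row
}
// Type safety is lost: callers must know what to expect for each field.
Object lastName = employee.get("lastName");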
Another approach is to use only domain objects at the DAO level and define the needed subsets of information as DTOs for each usage, then convert the Employee entity to each DTO using a generic DTO converter, as I have done lately in my professional Spring work. An MIT-licensed module is available in the Maven repository as the artifact dtoconverter, with further info and user guidance on the author's wiki:
http://ratamaa.fi/trac/dtoconverter
You get the quickest idea of it from the example page there.
Happy hunting...
Blaze-Persistence Entity Views have been created for exactly such a use case. You define the DTO structure as interface or abstract class and have mappings to your entity's attributes. When querying, you just pass in the class and the library will take care of generating an optimized query for the projection.
Here is a quick example:
@EntityView(Cat.class)
public interface CatView {
    @IdMapping("id")
    Integer getId();
    String getName();
}
CatView is the DTO definition, and here comes the querying part:
CriteriaBuilder<Cat> cb = criteriaBuilderFactory.create(entityManager, Cat.class);
cb.from(Cat.class, "theCat")
.where("father").isNotNull()
.where("mother").isNotNull();
EntityViewSetting<CatView, CriteriaBuilder<CatView>> setting = EntityViewSetting.create(CatView.class);
List<CatView> list = entityViewManager
.applySetting(setting, cb)
.getResultList();
Note that the essential part is that EntityViewSetting carries the CatView type and is applied onto an existing query. The generated JPQL/HQL is optimized for CatView, i.e. it only selects (and joins!) what it really needs.
SELECT
    theCat.id,
    theCat.name
FROM
    Cat theCat
WHERE theCat.father IS NOT NULL
  AND theCat.mother IS NOT NULL

hibernate jpa update two fields on persist and read from one only

One quick question for Java Hibernate/JPA users.
I have two tables (entities), A and B, related as A has many B (one-to-many). Entity A has a Set of B values in Java.
Due to a read performance issue, I want to implement master-detail denormalization: store the raw Set object (maybe serialized) directly in entity A, because reading the one-to-many relation through JPA costs me too much CPU time (updates are not an issue).
The problem is: can I achieve something where getBs always returns the denormalized object (so it's fast), and addB adds a new B to the Set and updates the denormalized object with new raw data prepared for faster reads?
It's an Oracle DB.
Entity example:
class A {
    Long id;
    String name;
    Set<B> arrayOfBs;
    byte[] denormalizedArrayOfB;

    Set<B> getArrayOfBs() {
        // pseudocode: deserialize the raw bytes back into a Set<B>
        return (Set<B>) deserialize(denormalizedArrayOfB);
    }

    void addArrayOfBs(B b) {
        // persist b
        // update and persist denormalizedArrayOfB with the new raw data
    }
    // getters and setters...
}

class B {
    Long id;
    A reference;
    String x;
    String y;
    // getters and setters...
}
That's complicated. There are better approaches to your problem:
You can simply replace the one-to-many association with a DAO query (sketched below). Whenever you fetch the parent entities you won't get the children collection (maybe there are far too many of them anyway); when you do want a parent's children, you simply run a DAO query, which is also easier to filter.
You can leave the children collection but use an in-memory cache to hold the fully initialized object graph. This might sound like a natural choice, but most likely you're going to trade consistency for performance.
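A minimal sketch of the first option, dropping the mapped collection and fetching children on demand through a DAO (the class and method names are mine; entities follow the question):

public class BDao {
    private final EntityManager em;

    public BDao(EntityManager em) {
        this.em = em;
    }

    // Children are fetched only when asked for, and the query is easy to
    // filter or page instead of loading the whole collection eagerly.
    public List<B> findByParent(Long aId, int maxResults) {
        return em.createQuery(
                    "select b from B b where b.reference.id = :aId", B.class)
                .setParameter("aId", aId)
                .setMaxResults(maxResults)
                .getResultList();
    }
}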

Java: Pattern for updating all equal objects (in the same context) that don't share the same reference

In my Java application I use equal objects multiple times in different places. That is, the equals method returns true when comparing these objects. Now I want to update one object and apply the changes to all objects that are equal to it. Do you know if there is a pattern for that?
My concrete use case is:
I am using JSF, JPA and CDI. A user is on a web page that allows him to edit the detached entity EntityA. The page is session-scoped. EntityA has two references to EntityB (also detached). These objects can be the same - not the same reference, but they may be equal.
@Entity
public class EntityA {
    @OneToOne
    private EntityB entity1;
    @OneToOne
    private EntityB entity2;
}
The JSF view lets the user select entity1 and entity2 from a selection list. It also shows some details of these EntityBs, and the user is allowed to edit entity1 and entity2 separately. Everything works fine, except when the user has chosen the same (equal) EntityB for entity1 and entity2: then only the reference that was edited is updated. Of course entity1 and entity2 are two different JPA entities and not the same reference, but I want to distribute the changes to all detached instances of EntityB. I have this situation hundreds of times in my application, so I don't want to keep track of which objects have to be updated in which situations; I need a solution that does it for me automatically. One idea was to keep all objects I use in this session in a special list, and every time a request is submitted and processed, iterate over this list and update all equal objects. But this sounds very dirty. Maybe there is a built-in JPA function to make all equal objects the same reference; I don't know if that is possible. Do you have a solution for this? Thanks.
I'm going to abstract your problem out a bit here: if a change to one object requires changing a number of other objects, consider putting the field that you're changing in a separate object, and have all those objects reference it.
For example, if you have:
class MyClass {
    String info;
    int id;
}
and two instances of MyClass with the same 'id' should both be updated when the 'info' field changes, then use this:
class MyClass {
    MyInfoClass info;
    int id;
}

class MyInfoClass {
    String value;
}
and give all equal instances of MyClass the same instance of MyInfoClass. Changing myClass.info.value will effectively change all instances of MyClass, because they all hold the same MyInfoClass instance.
Sorry if I've got the syntax slightly wrong, I jump between languages a lot.
I use this technique in a game I wrote recently, where a switch activates a door: both the switch and the door hold a Circuit object that has a boolean powered field. The door's isOpen() method simply returns circuit.powered, and when the switch is activated I just set switch.circuit.powered = true, and the door is automatically considered open. Previously, I had it searching the game's map for all doors with the same circuit id and changing the powered field on each.
This is classic form-handling logic:
- If the user clicks the save button, update the data in the database.
- Reload the data every time you render the web page.
- You should not cache the data in the web session.
- If you need caching, activate it in the persistence layer (e.g. the Hibernate cache).
