What I would like to realize is the following:
I have a dashboard class and a user class.
In my Java EE project I would like to get, in Java code, all dashboards to which a user has been subscribed.
The database contains a table (dashboard_users) with the following fields: idUser, idDashboard, isDefault and ID.
There should also be a Java POJO for the join table.
My question:
What should the JPA many-to-many mapping between these three classes (Dashboard.java/User.java/UserDashboard.java) look like?
I followed a lot of tutorials and examples, but for some reason there are always errors or other problems. It would be very welcome if someone could give an example, so I can see what I am doing wrong.
Thank you
Given the extra attribute on the association table you are going to need to model it (via a UserDashboard.java class as you asked). Quite unfortunate, as it adds a significant amount of work to your model layer.
If you find you do not need the extra attribute after all, then I would model User with a set of Dashboards, linked directly via a @JoinTable.
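For the record, that simpler variant would look something like this; a sketch only, with the join-table column names taken from your dashboard_users table:
@Entity
public class User {

    @Id
    private int id;

    // plain many-to-many; only viable if the isDefault flag is not needed on the link itself
    @ManyToMany
    @JoinTable(name = "dashboard_users",
               joinColumns = @JoinColumn(name = "idUser"),
               inverseJoinColumns = @JoinColumn(name = "idDashboard"))
    private Set<Dashboard> dashboards = new HashSet<Dashboard>();
}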
One way you could do this would be to see the relationship between User and Dashboard as a map in which the Dashboard is a key, there being an entry for every Dashboard associated with the User, and the value is a flag indicating whether that Dashboard is the default for that User. I admit this is a bit forced; it's an odd way to look at the relationship, perhaps even suspect as has been charged.
But the advantage of this view is that it lets you map the living daylights out of everything like this:
@Entity
public class Dashboard {

    @Id
    private int id;

    private String name;

    public Dashboard(int id, String name) {
        this.id = id;
        this.name = name;
    }

    protected Dashboard() {}
}
@Entity
public class User {

    @Id
    private int id;

    private String name;

    @ElementCollection
    private Map<Dashboard, Boolean> dashboards;

    public User(int id, String name) {
        this.id = id;
        this.name = name;
        this.dashboards = new HashMap<Dashboard, Boolean>();
    }

    protected User() {}

    // disclaimer: the following 'business logic' is not necessarily of the finest quality

    public Set<Dashboard> getDashboards() {
        return dashboards.keySet();
    }

    public Dashboard getDefaultDashboard() {
        for (Entry<Dashboard, Boolean> dashboard : dashboards.entrySet()) {
            if (dashboard.getValue()) {
                return dashboard.getKey();
            }
        }
        return null;
    }

    public void addDashboard(Dashboard dashboard) {
        dashboards.put(dashboard, false);
    }

    public void setDefaultDashboard(Dashboard newDefaultDashboard) {
        Dashboard oldDefaultDashboard = getDefaultDashboard();
        if (oldDefaultDashboard != null) {
            dashboards.put(oldDefaultDashboard, false);
        }
        dashboards.put(newDefaultDashboard, true);
    }
}
This maps to a table structure along the lines of the SQL Hibernate generates, which I think is roughly what you want. The generated names on the User_dashboards table are pretty shoddy; you could customise them quite easily with some annotations or some XML. Personally, I like to keep all the filthy details of the actual mapping between the objects and the database in an orm.xml; here's what you'd need to add to use more sensible names:
<entity class="User">
    <attributes>
        <element-collection name="dashboards">
            <map-key-join-column name="Dashboard_id" />
            <column name="is_default" />
        </element-collection>
    </attributes>
</entity>
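For completeness, a quick usage sketch of this mapping; em is assumed to be an injected EntityManager, everything else comes from the classes above:
Dashboard sales = new Dashboard(1, "Sales");
Dashboard support = new Dashboard(2, "Support");

User alice = new User(10, "Alice");
alice.addDashboard(sales);
alice.addDashboard(support);
alice.setDefaultDashboard(sales);

em.persist(sales);
em.persist(support);
em.persist(alice);   // fills the join table with (Dashboard_id, is_default) rows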
I have looked at various tutorials about JPA with Spring Data, and this has been done differently on many occasions, so I am not quite sure what the correct approach is.
Assume there is the following entity:
package stackoverflowTest.dao;

import javax.persistence.*;

@Entity
@Table(name = "customers")
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id")
    private long id;

    @Column(name = "name")
    private String name;

    public Customer(String name) {
        this.name = name;
    }

    public Customer() {
    }

    public long getId() {
        return id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
We also have a DTO which is retrieved in the service layer and then handed to the controller/client side.
package stackoverflowTest.dto;

public class CustomerDto {

    private long id;
    private String name;

    public CustomerDto(long id, String name) {
        this.id = id;
        this.name = name;
    }

    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
So now assume the Customer wants to change his name in the webui - then there will be some controller action, where there will be the updated DTO with the old ID and the new name.
Now I have to save this updated DTO to the database.
Unfortunately, there is currently no way to update an existing customer (other than deleting the entry in the DB and creating a new Customer with a new auto-generated id).
As this is not feasible (especially considering that such an entity could potentially have hundreds of relations), two straightforward solutions come to mind:
Make a setter for the id in the Customer class, thus allowing the id to be set, and then save the Customer object via the corresponding repository.
or
Add the id field to the constructor, and whenever you want to update a customer, create a new object with the old id but the new values for the other fields (in this case only the name).
So my question is whether there is a general rule for how to do this,
and maybe what the drawbacks of the two methods I described are?
Even better than @Tanjim Rahman's answer: with Spring Data JPA you can use the method T getOne(ID id)
Customer customerToUpdate = customerRepository.getOne(id);
customerToUpdate.setName(customerDto.getName());
customerRepository.save(customerToUpdate);
It's better because getOne(ID id) gives you only a reference (proxy) object and does not fetch it from the DB. On this reference you can set what you want, and on save() it will do just an SQL UPDATE statement, like you expect. In comparison, when you call find() as in @Tanjim Rahman's answer, Spring Data JPA will do an SQL SELECT to physically fetch the entity from the DB, which you don't need when you are just updating.
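In a Spring application this would typically sit inside a transactional service method; a minimal sketch (the CustomerService class and its wiring are assumptions, not part of the question):
@Service
public class CustomerService {

    private final CustomerRepository customerRepository;

    public CustomerService(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    @Transactional
    public void rename(CustomerDto dto) {
        // obtain a lazy reference and update only the name
        Customer reference = customerRepository.getOne(dto.getId());
        reference.setName(dto.getName());
        customerRepository.save(reference);
    }
}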
In Spring Data you simply define an update query if you have the ID
@Repository
public interface CustomerRepository extends JpaRepository<Customer, Long> {

    @Modifying // required for JPQL update/delete queries, and must run inside a transaction
    @Query("update Customer c set c.name = :name WHERE c.id = :customerId")
    void setCustomerName(@Param("customerId") Long id, @Param("name") String name);
}
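Calling it is then a one-liner; a usage sketch only, with transaction handling left to your service layer:
// issues a single UPDATE without loading the Customer first
customerRepository.setCustomerName(customerDto.getId(), customerDto.getName());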
Some solutions claim to use Spring Data but instead fall back to old-school JPA (sometimes even in a manner that allows lost updates).
A simple JPA update looks like this:
Customer customer = em.find(Customer.class, id); // consider em to be a JPA EntityManager
customer.setName(customerDto.getName());
em.merge(customer);
This is more an object-initialization question than a JPA question. Both methods work, and you can have both of them at the same time. Usually, if the data member's value is ready before instantiation you use a constructor parameter; if the value can be updated after instantiation you should have a setter.
If you need to work with DTOs rather than entities directly then you should retrieve the existing Customer instance and map the updated fields from the DTO to that.
Customer entity = //load from DB
//map fields from DTO to entity
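Concretely, that might look like the following sketch, assuming the CustomerRepository shown earlier and a Spring Data version where findById returns an Optional:
Customer entity = customerRepository.findById(dto.getId())
        .orElseThrow(() -> new EntityNotFoundException("No customer " + dto.getId()));
entity.setName(dto.getName());        // map fields from DTO to entity
customerRepository.save(entity);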
So now assume the Customer wants to change his name in the webui - then there will be some controller action, where there will be the updated DTO with the old ID and the new name.
Normally, you have the following workflow:
The user requests his data from the server and obtains it in the UI;
The user corrects his data and sends it back to the server with the already present ID;
On the server you obtain the DTO with the data updated by the user, find the corresponding entity in the DB by ID (otherwise throw an exception), and transform the DTO into an entity with all the given data, foreign keys, etc.;
Then you just merge it, or, if using Spring Data, invoke save(), which in turn will merge it (see this thread);
P.S. This operation will inevitably issue two queries: a select and an update, even if you only want to update a single field. However, if you put Hibernate's proprietary @DynamicUpdate annotation on top of the entity class, the update statement will include only the fields that actually changed rather than all of them.
P.P.S. If you do not want to pay for the first select statement and prefer to use Spring Data's @Modifying query, be prepared to lose the L2C cache region related to the modified entity; the situation is even worse with native update queries (see this thread). Also, of course, be prepared to write those queries manually, test them, and support them in the future.
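For reference, a minimal sketch of the @DynamicUpdate hint mentioned above, which assumes Hibernate is the JPA provider:
import org.hibernate.annotations.DynamicUpdate;

@Entity
@DynamicUpdate   // generated UPDATE statements include only the columns that actually changed
@Table(name = "customers")
public class Customer {
    // fields and accessors as shown earlier
}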
I have encountered this issue as well!
Luckily, I have found two ways and understand some things, but the rest is still not clear.
I hope someone will discuss or add to this if you know more.
Use a repository that extends JpaRepository and call save(entity). Example:
Person person = this.personRepository.findById(0L).orElseThrow();
person.setName("Neo");
this.personRepository.save(person);
This block of code updates the name of the record which has id = 0.
Use @Transactional from javax or the Spring framework. Put @Transactional on your class or on a specific method; both are OK. I read somewhere that this annotation performs a commit at the end of your method's flow, so everything you modified on the entity will be updated in the database.
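That commit-time behaviour is JPA dirty checking: inside a transaction, changes to a managed entity are flushed automatically, so an explicit save() is not even required. A sketch under the assumption that PersonRepository extends JpaRepository<Person, Long> (the service class itself is made up):
@Service
public class PersonService {

    private final PersonRepository personRepository;

    public PersonService(PersonRepository personRepository) {
        this.personRepository = personRepository;
    }

    @Transactional
    public void rename(long id, String newName) {
        Person person = personRepository.findById(id).orElseThrow();
        person.setName(newName);
        // no explicit save(): the managed entity is flushed when the transaction commits
    }
}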
There is a method in JpaRepository, getOne. It is deprecated at the moment in favor of getById, so the correct approach would be:
Customer customerToUpdate = customerRepository.getById(id);
customerToUpdate.setName(customerDto.getName());
customerRepository.save(customerToUpdate);
I have the following entities:
@Entity
public class Person {
    @Id public Long id;
    public String name;
    public Ref<Picture> picture;
    public String email;
    public byte age;
    public short birthday; // day of year
    public String school;
    public String very_long_life_story;
    // ... some extra fields ...
}
@Entity
public class Place {
    @Id public Long id;
    public String name;
    public String comment;
    public long createdDateMS;
    public long visitors;
    @Load public List<Ref<Person>> owners;
}
Few notes:
(A) The maximum size of owners in the Place entity is about 4.
(B) The Person class is presumably very big, and when querying a Place I would like to show only a subset of the person data. This optimization is aimed both at server-client and server-database communication. Since Objectify (GAE, actually) only loads/saves entire entities, I would like to do the following:
@Entity
public class PersonShort {
    @Id public Long id;
    public Ref<Picture> picture;
    public String name;
}
and inside Place, I would like to have (instead of owners):
@Load public List<PersonShort> owners;
(C) The problem with this approach is that now I have duplication inside the datastore.
Although this isn't such a bad thing, the real problem is when a Person tries to save a new picture or change his name; I will not only have to update it in his Person entity,
but also search for every Place that has a PersonShort with the same id and update that.
(D) So the question is, is there any solution? Or am I simply forced to choose between the options?
(1) Loading multiple Person entities, which are big, when all I need is some really small piece of information about them.
(2) Data duplication with many writes.
If so, which one is better (currently, I believe it's 1)?
EDIT
What about loading the entire class (1), but sending only part of it?
@Entity
public class Person {
    @Id public Long id;
    public String name;
    public Ref<Picture> picture;
    public String email;
    public byte age;
    public short birthday; // day of year
    public String school;
    public String very_long_life_story;
    // ... some extra fields ...
}
public class PersonShort {
    public long id;
    public String name;
    public PersonShort(Person person) {   // copy only the fields needed by the client
        this.id = person.id;
        this.name = person.name;
    }
}
@Entity
public class Place {
    @Id public Long id;
    public String name;
    public String comment;
    public long createdDateMS;
    public long visitors;

    // Ignore saving to datastore
    @Ignore
    public List<PersonShort> owners;

    // Do not serialize when sending to client
    @ApiResourceProperty(ignored = AnnotationBoolean.TRUE)
    @Load public List<Ref<Person>> ownersRef;

    @OnLoad private void loadOwners() {
        owners = new ArrayList<PersonShort>();
        for (Ref<Person> r : ownersRef) {
            owners.add(new PersonShort(r.get()));
        }
    }
}
It sounds like you are optimizing prematurely. Do you know you have a performance issue?
Unless you're talking about hundreds of K, don't worry about the size of your Person object in advance. There is no practical value in hiding a few extra fields unless the size is severe - and in that case, you should extract the big fields into some sort of meaningful entity (PersonPicture or whatnot).
No definite answer, but some suggestions to look at:
Lifecycle callbacks.
When you put your Person entity, you can have an @OnSave handler to automatically store your new PersonShort entity. This has the advantage of being transparent to the caller, but obviously you are still dealing with 2 entity writes instead of 1.
You may also find you are having to fetch two entities too; initially you may fetch the PersonShort and then later need some of the detail in the corresponding Person. Remember Objectify's caching can reduce your trips to Datastore: it's arguably better to have a bigger, cached, entity than two separate entities (meaning two RPCs).
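A rough sketch of what such an @OnSave hook could look like, assuming PersonShort is registered as its own Objectify entity and reusing the field names from the question:
@Entity
public class Person {
    @Id public Long id;
    public String name;
    public Ref<Picture> picture;
    // ... remaining fields as in the question ...

    @OnSave
    void syncShortVersion() {
        // keep the denormalized copy in step with this entity
        PersonShort shortVersion = new PersonShort();
        shortVersion.id = id;
        shortVersion.name = name;
        shortVersion.picture = picture;
        ofy().save().entity(shortVersion);   // static import of ObjectifyService.ofy()
    }
}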
Store your core properties (the ones in PersonShort) as separate properties in your Person class and then have the extended properties as a single JSON string which you can deserialize with Gson.
This has the advantage that you are not duplicating properties, but the disadvantage is that anything you want to be able to search on cannot be in the JSON blob.
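If you go the JSON-blob route, a minimal sketch could look like this; the ExtendedData class and field names are made up for illustration:
@Entity
public class Person {
    @Id public Long id;
    public String name;            // core, queryable property
    public String extendedJson;    // rarely-used fields collapsed into one JSON string

    // hypothetical value class holding the rarely-used fields
    public static class ExtendedData {
        public String school;
        public String veryLongLifeStory;
    }

    public ExtendedData getExtended() {
        return new Gson().fromJson(extendedJson, ExtendedData.class);
    }

    public void setExtended(ExtendedData data) {
        this.extendedJson = new Gson().toJson(data);
    }
}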
Projection Queries. You can tell Datastore to return only certain properties from your entities. The problem with this method is that you can only return indexed properties, so you will probably find you need too many indexes for this to be viable.
Also, use @Load annotations with care. For example, in your Place class, think whether you really need all those owners' Person details when you fetch the owners. Perhaps you only need one of them? i.e., instead of getting a Place and 4 Persons every time you fetch a Place, maybe you are better off just loading the required Person(s) when you need them? (It will depend on your application.)
It is a good practice to return a different entity to your client than the one you get from your database. So you could create a ShortPerson or something that is only used as a return object in your REST endpoints. It will accept a Person in its constructor and fill in the properties you want to return to the client from this more complete object.
The advantage of this approach is actually less about optimization and more about the fact that your server models will change over time, while that change can be completely independent of your API. Also, you choose which data is publicly accessible, which is what you are trying to do.
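A minimal sketch of such a return object, mirroring the fields from the question (adjust to whatever your endpoint actually needs to expose):
public class ShortPerson {
    public Long id;
    public String name;

    public ShortPerson(Person person) {
        this.id = person.id;
        this.name = person.name;
    }
}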
As for the optimization between db and server, I wouldn't worry about it until it is an issue.
There are a lot of articles here and all over the web, but they all target different Objectify versions and seem not to work for one reason or another.
I have an entity, which references another entity (e.g. an Account entity references a User entity):
@Cache
@Entity
public final class Account {

    @Id Long id;
    @Index private Ref<User> user;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public User getUser() {
        return user.get();
    }

    public void setUser(User user) {
        this.user = Ref.create(user);
    }
}
I am trying to do this:
From the client, GET the account entity over REST/Google Cloud Endpoints.
Modify the resource.
UPDATE it on the server.
As discussed in "Objectify loads object behind Ref<?> even when @Load is not specified", the above code always returns the referenced user as well, which I don't want.
One option would be, as @svpino suggested, "Make your @ApiMethod return a different Account object without the user property (thus avoiding fetching the user if you don't need it)." This works as long as I don't want to UPDATE the resource. If I need to UPDATE, the Key/Ref needs to be preserved (even though I don't need it on the client).
One possible approach that I can see would be using Key instead of Ref and rendering a web-safe string, then recreating the user during UPDATE.
private Key<User> user;
public String getUser() {
return user.toString();
}
public void setUser(String user) {
this.user = Key.create(user);
}
The string looks like "Key(User(5723348596162560))", but it seems not to be reconstituted (at least I get an exception here, haven't tracked it down yet).
Another approach would be writing an @ApiTransformer, which did not solve the problem either.
Jeff (@StickFigure) has posted about this several times over the last few years, and the issue still seems not to be solved.
What's the current state with Objectify 5.0.2 and what's the recommendation for preserving the key between roundtrips, when the key is not needed on the client?
You need to annotate the property that you want to omit with @ApiResourceProperty(ignored = AnnotationBoolean.TRUE)
Google's documentation says the following about @ApiResourceProperty:
@ApiResourceProperty provides more control over how resource properties are exposed in the API. You can use it on a property getter or setter to omit the property from an API resource. You can also use it on the field itself, if the field is private, to expose it in the API. You can also use this annotation to change the name of a property in an API resource.
I encourage you to read more by visiting this link
https://developers.google.com/appengine/docs/java/endpoints/annotations#apiresourceproperty
So in your case your class should look like this after the modification.
@Cache
@Entity
public final class Account {

    @Id Long id;
    @Index private Ref<User> user;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    @ApiResourceProperty(ignored = AnnotationBoolean.TRUE)
    public User getUser() {
        return user.get();
    }

    @ApiResourceProperty(ignored = AnnotationBoolean.TRUE)
    public void setUser(User user) {
        this.user = Ref.create(user);
    }
}
The following code serializes an entity reference to a web-safe string so it can be transferred over REST. When the entity is sent back to the server, the Ref<> is reconstituted. This way the server-side reference is not lost while the object does a round-trip to the client, and the referenced objects themselves are not transferred to the client and back, but can still be "worked" with as a Ref<>.
@Index private Ref<User> user;

// for serialization
public String getUser() {
    // .toWebSafeString() will be added in a future version of Objectify and will do the same as .getString()
    return user.getKey().getString();
}

public void setUser(String webSafeString) {
    Key<User> key = Key.create(webSafeString);
    this.user = Ref.create(key);
}
Two separate functions (not named well, I admit) are there for loading the actual object on the server and for creating the reference in the first place:
// for load and create reference
public User loadUser() {
    return user.get();
}

public void referenceUser(User user) {
    this.user = Ref.create(user);
}
I hope this solves the problem for everybody. This did not yet go through thorough testing, so comments are still welcome.
I have run a test to compare using a Key<> with a Ref<>, and to me it looks like even with Ref<> the entity is only reconstituted when loadEntity()/.get() is called. So Ref<> is probably better, as @Load annotations will work. Maybe the Objectify guys can confirm this.
You can create a class that extends Ref<User> and use an @ApiTransformer to transfer that class between backend and client:
@ApiTransformer(UserRefTransformer.class)
public class UserRef extends LiveRef<User> {
}

public class UserRefTransformer implements Transformer<UserRef, User> {
    // Your transformation code goes here
}
I am not sure what the best practice is for dealing with collection/lookup tables in RequestFactory.
For example if I have following two Domain objects:
@Entity
public class Experiment {

    private Long id;
    private String name;

    @ManyToOne(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    private UnitOfMeasure unitOfMeasure;

    public Experiment() { }

    public String getName() {
        return name;
    }

    public Long getId() {
        return id;
    }

    public void setName(String name) {
        this.name = name;
    }

    public UnitOfMeasure getUnitOfMeasure() {
        return unitOfMeasure;
    }

    public void setUnitOfMeasure(UnitOfMeasure unitOfMeasure) {
        this.unitOfMeasure = unitOfMeasure;
    }
}

@Entity
public class UnitOfMeasure {

    private Long id;
    private String unit_type;

    public UnitOfMeasure() { }

    public String getUnitType() {
        return unit_type;
    }

    public Long getId() {
        return id;
    }

    public void setUnitType(String unitType) {
        this.unit_type = unitType;
    }
}
This is a normal unidirectional 1:n relationship between Experiment and UnitOfMeasure, using a foreign key in the Experiment table.
I have a limited amount of different UnitOfMeasure instances which usually don't change.
The web-app provides a view where the user can change some properties of the Experiment instance. The view uses the Editor framework. For changing the UnitOfMeasure of a specific Experiment I use a ValueListBox and render the unit_type property.
Because the list of available UnitOfMeasure instances is static, I use AutoBeanFactory to create a JSON string which I put into the HTML host page; during application start I parse it (the same thing for all other collection-like table values) and store the values in a singleton class instance (AppData), which I pass to setAcceptableValues.
Currently I derive UnitOfMeasureProxy from EntityProxy but in order to decode/encode it with AutoBeanFactory I have to annotate the Factory with EntityProxyCategory. I somehow suspect that a ValueProxy would be a better fit.
However with a ValueProxy when I change the UnitOfMeasure of a specific Experiment the entire ValueProxy instance is transmitted over the wire.
From a database point of view however only changing the value for the foreignkey in the Experiment table is required.
So what is the best practice (ValueProxy vs EntityProxy) for collection like tables and child values respectively?
In many cases, references to other entities are best modelled using their IDs rather than the EntityProxys themselves (it's debatable, but I think it's also true for server-side code, or actually any code that crosses unit-of-work boundaries: JPA EntityManager lifetime, Hibernate session, etc.).
BTW, the proper way to serialize RequestFactory proxies is to use a ProxySerializer.
Make sure you use GWT 2.5.0-rc1 though if you have lists of ValueProxys (see issue 6961)
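For reference, the ProxySerializer route looks roughly like this; the requestFactory instance and UnitOfMeasureProxy are assumed from the question, and the sketch is based on the GWT 2.5 RequestFactory API:
// serialize a proxy into a string payload that survives a round-trip
DefaultProxyStore store = new DefaultProxyStore();
ProxySerializer serializer = requestFactory.getSerializer(store);
String key = serializer.serialize(unitOfMeasureProxy);
String payload = store.encode();

// ... later, rebuild the proxy from the payload ...
ProxySerializer reader = requestFactory.getSerializer(new DefaultProxyStore(payload));
UnitOfMeasureProxy restored = reader.deserialize(UnitOfMeasureProxy.class, key);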
I am coding a ribbon/achievements system for a website and I have to write some logic for each ribbon in my system. For example, you could earn a ribbon if you're among the first 2,000 people registering on the website, or after 1,000 posts in the forum. The idea is very similar to Stack Overflow's badges, really.
So, every ribbon is obviously in the database but they also need a bit of logic to determine when a user has earned the ribbon.
In the way I coded it, Ribbon is a simple interface:
public interface Ribbon {
    public void setId(int id);
    public int getId();
    public String getTitle();
    public void setTitle(String title);
    public boolean isEarned(User user);
}
RibbonJpa is an abstract class that implements the Ribbon interface, leaving the isEarned() method undefined:
@Entity
@Table(name = "ribbon")
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "ribbon_type")
public abstract class RibbonJpa implements Ribbon {

    @Id
    @Column(name = "id", nullable = false)
    int id;

    @Column(name = "title", nullable = false)
    private String title;

    @Override
    public int getId() {
        return id;
    }

    @Override
    public void setId(int id) {
        this.id = id;
    }

    @Override
    public String getTitle() {
        return title;
    }

    @Override
    public void setTitle(String title) {
        this.title = title;
    }
}
You can see I define the inheritance strategy as SINGLE_TABLE (since I have to code like 50 ribbons and I don't need additional columns for any of them).
Now, a specific ribbon will be implemented like this:
@Entity
public class FirstUsersRibbon extends RibbonJpa implements Ribbon {

    public FirstUsersRibbon() {
        super.setId(1);
        super.setTitle("First 2,000 users registered to the website");
    }

    @Override
    public boolean isEarned(User user) {
        // My logic to check whether the specified user has won the award
    }
}
This code works fine; the tables are created in the database the way I expect (I use DDL generation in my local environment).
The thing is, it feels wrong to code business logic in a domain object. Is it good practice? Can you suggest a better solution? Also, I'm not able to autowire any DAOs into the entity (FirstUsersRibbon), and I need them in the business logic (in this case, I need a DAO to check whether the user is among the first 2,000 users registered on the website).
Any help is very much appreciated.
Thank you!
The thing is, it feels wrong to code business logic in a domain object.
Many would say the reverse is true: that it is an anti-pattern (the anaemic domain model) to have business logic anywhere else. See Domain-Driven Design for more information.
You might then wonder what the middle tier of the conventional 3-tier architecture was for. It provides a service layer for the application. See my related question "What use are EJBs?".
Also, I'm not able to Autowire any DAOs in the entity
If you're using Spring and Hibernate, have a look at http://jblewitt.com/blog/?p=129: this gives a good description of a similar problem with a variety of solutions.
If you're looking for a rich domain model in the way that you describe, then it can be a good idea to instantiate domain objects via Spring, and hence be able to inject DAOs into your domain objects.
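One common way to do that for entities which Hibernate (rather than Spring) instantiates is Spring's @Configurable support, which requires AspectJ compile-time or load-time weaving. A sketch only; the UserDao type and its getRegistrationRank method are assumptions standing in for whatever your DAO actually offers:
@Configurable
@Entity
public class FirstUsersRibbon extends RibbonJpa {

    // injected by Spring via AspectJ weaving; not persisted
    @Autowired
    @Transient
    private transient UserDao userDao;

    public FirstUsersRibbon() {
        super.setId(1);
        super.setTitle("First 2,000 users registered to the website");
    }

    @Override
    public boolean isEarned(User user) {
        // hypothetical DAO call: the user's position in registration order
        return userDao.getRegistrationRank(user) <= 2000;
    }
}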