I have three JPA entities like this, as well as the corresponding JPA repositories.
@Entity
public class ChairEntity {
    ...
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    @JoinTable(name = "chair_image")
    private Set<ImageEntity> images = new HashSet<>();
    ...
}
@Entity
public class TableEntity {
    ...
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    @JoinTable(name = "table_image")
    private Set<ImageEntity> images = new HashSet<>();
    ...
}
@Entity
public class ImageEntity {
    ...
    private String description;
    @Lob
    private byte[] data;
    ...
}
These objects are created and updated via a REST API. This usually works fine; e.g., I may add multiple ImageEntities at once like this (all code blocks run inside their own transaction):
chairEntity.getImages().add(new ImageEntity(..));
chairEntity.getImages().add(new ImageEntity(..));
chairRepository.save(chairEntity);
...or update multiple ImageEntities of the same ChairEntity at once:
chairEntity.getImages().forEach(imageEntity -> {
    imageEntity.setDescription("some other description");
});
chairRepository.save(chairEntity);
In both cases, all changes are successfully cascaded and saved.
If, however, I update an existing ImageEntity and add another one at the same time, it fails:
chairEntity.getImages().forEach(imageEntity -> {
    imageEntity.setDescription("some other description");
});
chairEntity.getImages().add(new ImageEntity(...));
chairRepository.save(chairEntity); // crashes
The exception is as follows (an equivalent error is thrown when using H2):
org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "chair_image_pkey"
When inspecting the DB log, it seems like Hibernate is:
1. inserting the new image (successful),
2. updating the existing image (successful),
3. inserting an entry into the join/collection table (chair_image) that references the chair and the existing image. This last insert throws the constraint violation (the JdbcSQLIntegrityConstraintViolationException on H2, the PSQLException above on PostgreSQL), because that combination of foreign keys already exists: the old image was already linked to the chair.
Why is this happening, and how do I solve it? Saving and flushing the changes individually inside the same transaction doesn't seem to work either.
A workaround, in case anyone else comes across this problem: reverse the order of operations:
chairEntity.getImages().add(new ImageEntity(...));
chairRepository.saveAndFlush(chairEntity);
chairEntity.getImages().forEach(imageEntity -> {
    imageEntity.setDescription("some other description");
});
chairRepository.save(chairEntity); // works now
The order in which Hibernate executes the SQL statements stays the same, but thanks to the flush in between, the faulty insert into the join table no longer happens.
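For completeness, a minimal sketch of the workaround as a single transactional service method (the service class, the repository usage, and the no-arg ImageEntity constructor are assumptions based on the snippets above):
@Service
public class ChairService {

    private final ChairRepository chairRepository;

    public ChairService(ChairRepository chairRepository) {
        this.chairRepository = chairRepository;
    }

    @Transactional
    public void addAndUpdateImages(Long chairId, String newDescription) {
        ChairEntity chair = chairRepository.findById(chairId).orElseThrow();

        // Add the new image first and flush, so the INSERT into the
        // chair_image join table happens before any updates.
        chair.getImages().add(new ImageEntity());
        chairRepository.saveAndFlush(chair);

        // Now update the existing images; thanks to the flush above,
        // Hibernate no longer re-inserts the existing join-table rows.
        chair.getImages().forEach(image -> image.setDescription(newDescription));
        chairRepository.save(chair);
    }
}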
Related
I'm currently learning Spring Boot and Spring Data JPA.
I'm using a PostgreSQL database for storing the data.
My goal is to store ingredients with a unique, custom ID (you just type it in when creating the ingredient), but when another ingredient with the same ID gets inserted, there should be some kind of error. In my understanding, this is what happens when I use the @Id annotation; Hibernate also logs the correct create table statement.
This is my Ingredient class:
@Entity
public class Ingredient {

    @Id
    @Column(name = "ingredient_id")
    private String ingredient_id;

    @Column(name = "name")
    private String name;

    @Column(name = "curr_stock")
    private double curr_stock;

    @Column(name = "opt_stock")
    private double opt_stock;

    @Column(name = "unit")
    private String unit;

    @Column(name = "price_per_unit")
    private double price_per_unit;

    @Column(name = "supplier")
    private String supplier;

    // ... getters, setters, constructors (they work fine, I can insert and get the data)
}
My controller looks like this:
@RestController
@RequestMapping(path = "api/v1/ingredient")
public class IngredientController {

    private final IngredientService ingredientService;

    @Autowired
    public IngredientController(IngredientService ingredientService) {
        this.ingredientService = ingredientService;
    }

    @GetMapping
    public List<Ingredient> getIngredients() {
        return ingredientService.getIngredients();
    }

    @PostMapping
    public void registerNewIngredient(@RequestBody Ingredient ingredient) {
        ingredientService.saveIngredient(ingredient);
    }
}
And my service class just uses the save() method from the JpaRepository to store new ingredients.
Up to this point I had the feeling that I understood the whole thing, but when sending two POST requests to my application, each containing an ingredient with the id "1234", and then showing all ingredients with a GET request, the first ingredient was simply replaced by the second one, with no error or anything like that in between.
Sending direct SQL INSERT statements with the same values to the database throws an error, because the primary key constraint gets violated, just as it should be. Exactly this should have happened after the second POST request (in my understanding).
What did I get wrong?
Update:
From the terminal output and the answers I got below, it is now clear that the save() method can be understood as "insert, or update if the primary key already exists".
But is there a better way around this than error-handling every new entry by hand?
The save method will create the entry, or update it if the id already exists. I'd switch to auto-generating the ID on insert instead of creating the IDs manually; that would prevent the issue you have.
When saving a new ingredient, JPA will perform an update if the value contained in the "id" field is already in the table.
A nice way to achieve what you want is:
ingredientRepository.findById(ingredientDTO.getIngredientId())
        .ifPresentOrElse(
                ingredientEntity -> ResponseEntity.badRequest().build(),
                () -> ingredientRepository.save(ingredientDTO));
You can return an error if the entity is already in the table; otherwise (the second lambda) you save the new row.
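For illustration, the same existence check could look like this at the web layer (existsById comes from CrudRepository; the getter name and the 409 Conflict response are assumptions, and note that ifPresentOrElse returns void, so a ResponseEntity built inside its lambda is discarded):
@PostMapping
public ResponseEntity<Void> registerNewIngredient(@RequestBody Ingredient ingredient) {
    // Reject the request instead of letting save() silently overwrite the row.
    if (ingredientRepository.existsById(ingredient.getIngredient_id())) {
        return ResponseEntity.status(HttpStatus.CONFLICT).build();
    }
    ingredientRepository.save(ingredient);
    return ResponseEntity.ok().build();
}
Note that this check-then-save is not atomic: two concurrent requests can still race past the check, and since save() merges, the primary-key constraint never fires. Forcing an INSERT with persist (see the next answer) is what actually makes the database reject the duplicate.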
This is a downside of using CrudRepository save() on an entity whose id is set by the application.
Under the hood, EntityManager.persist() will only be called if the id is null; otherwise EntityManager.merge() is called.
Using the EntityManager directly gives you more fine-grained control, and you can call the persist method yourself when required.
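For illustration, Spring Data's save() logic is roughly the following (a simplified sketch, not the exact SimpleJpaRepository source):
@Transactional
public <S extends T> S save(S entity) {
    if (entityInformation.isNew(entity)) { // with a plain @Id field: id == null
        em.persist(entity);                // always issues an INSERT
        return entity;
    }
    return em.merge(entity);               // SELECT, then INSERT or UPDATE
}
Since your id is assigned manually, the entity is never considered new and save() always takes the merge branch. A minimal sketch of forcing an INSERT instead (the service class and method names are assumptions):
@Service
public class IngredientPersistService {

    @PersistenceContext
    private EntityManager entityManager;

    // persist() always INSERTs, so a duplicate primary key surfaces as a
    // constraint violation at flush time instead of a silent update.
    @Transactional
    public void insertNew(Ingredient ingredient) {
        entityManager.persist(ingredient);
    }
}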
First, here are my entities.
Player:
@Entity
@JsonIdentityInfo(generator = ObjectIdGenerators.UUIDGenerator.class,
        property = "id")
public class Player {
    // other fields

    @ManyToOne
    @JoinColumn(name = "pla_fk_n_teamId")
    private Team team;

    // methods
}
Team:
@Entity
@JsonIdentityInfo(generator = ObjectIdGenerators.UUIDGenerator.class,
        property = "id")
public class Team {
    // other fields

    @OneToMany(mappedBy = "team")
    private List<Player> members;

    // methods
}
As many topics have already stated, you can avoid the StackOverflowError in your web service in many ways with Jackson.
That's cool and all, but JPA still constructs an entity with infinite recursion into another entity before the serialization. This is just ugly, and the request takes much longer (see the IntelliJ debugger screenshot).
Is there a way to fix this, knowing that I want different results depending on the endpoint? Examples:
endpoint /teams/{id} => Team={id..., members=[Player={id..., team=null}]}
endpoint /members/{id} => Player={id..., team={id..., members=null}}
Thank you!
EDIT: maybe the question isn't very clear given the answers I got, so I'll try to be more precise.
I know that it is possible to prevent the infinite recursion either with Jackson (@JsonIgnore, @JsonManagedReference/@JsonBackReference, etc.) or by doing some mapping into DTOs. The problem I still see is this: both of the above are post-query processing. The object that Spring JPA returns will still be (for example) a Team, containing a list of players, containing a team, containing a list of players, and so on.
I would like to know if there is a way to tell JPA or the repository (or anything) not to bind entities within entities over and over again.
Here is how I handle this problem in my projects.
I use the concept of data transfer objects, implemented in two versions: a full object and a light object.
I define an object containing the referenced entities as lists as a Dto (a data transfer object that only holds serializable values), and I define an object without the referenced entities as an Info.
An Info object only holds information about the entity itself, not about its relations.
Now when I deliver a Dto object over a REST API, I simply put Info objects in place of the references.
Let's assume I deliver a PlayerDto over GET /players/1:
public class PlayerDto {
    private String playerName;
    private String playerCountry;
    private TeamInfo team;
}
Whereas the TeamInfo object looks like this:
public class TeamInfo {
    private String teamName;
    private String teamColor;
}
compared to a TeamDto:
public class TeamDto {
    private String teamName;
    private String teamColor;
    private List<PlayerInfo> players;
}
This avoids endless serialization and also gives your REST resources a logical end, as otherwise you would be able to GET /player/1/team/player/1/team...
Additionally, the concept clearly separates the data layer from the client layer (in this case the REST API), as you don't pass the actual entity objects to the interface. For this, you convert the actual entity inside your service layer to a Dto or Info. I use http://modelmapper.org/ for this, as it's super easy (one short method call).
I also fetch all referenced entities lazily. My service method, which gets the entity and converts it to the Dto, therefore runs inside a transaction scope, which is good practice anyway.
Lazy fetching
To tell JPA to fetch an entity lazily, simply modify your relationship annotation by defining the fetch type. @ManyToOne relationships default to fetch = FetchType.EAGER, which in your situation is problematic (collections like @OneToMany already default to LAZY). That is why you should change it to fetch = FetchType.LAZY:
public class TeamEntity {

    @OneToMany(mappedBy = "team", fetch = FetchType.LAZY)
    private List<PlayerEntity> members;
}
Likewise the Player:
public class PlayerEntity {

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "pla_fk_n_teamId")
    private TeamEntity team;
}
When calling your repository method from your service layer, it is important that this happens within a @Transactional scope; otherwise, you won't be able to access the lazily referenced entities. That would look like this:
@Transactional(readOnly = true)
public TeamDto getTeamByName(String teamName) {
    TeamEntity entity = teamRepository.getTeamByName(teamName);
    return modelMapper.map(entity, TeamDto.class);
}
In my case I realized I did not need a bidirectional (OneToMany/ManyToOne) relationship.
This fixed my issue:
// Team class:
@OneToMany(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
private Set<Player> members = new HashSet<>();

// Player class - these three lines removed:
// @ManyToOne
// @JoinColumn(name = "pla_fk_n_teamId")
// private Team team;
Project Lombok might also produce this issue. If you are using Lombok, try adding @ToString and @EqualsAndHashCode exclusions for the relationship fields:
@Data
@Entity
@EqualsAndHashCode(exclude = { "members" }) // this,
@ToString(exclude = { "members" })          // and this
public class Team implements Serializable {
    // ...
This is a nice guide on the annotations for handling infinite recursion: https://www.baeldung.com/jackson-bidirectional-relationships-and-infinite-recursion
You can use the @JsonIgnoreProperties annotation to avoid the infinite loop, like this:
@JsonIgnoreProperties("members")
private Team team;
or like this:
@JsonIgnoreProperties("team")
private List<Player> members;
or both.
I have a form to fill a POJO called Father. Inside it, I have a FotoFather field.
When I save a new Father, the FotoFather object is saved automatically as well (via the Hibernate cascade).
FotoFather.fotoNaturalUrl must be filled with the value of Father.id, and here is the problem!
When I'm saving Father to the DB, of course I don't yet have the Father.id value to fill FotoFather.fotoNaturalUrl with. How can I solve this problem?
Thank you
@Entity
@Table(name = "father")
public class Father implements Serializable {
    ...
    @Id
    @Column(name = "id")
    @GeneratedValue(strategy = GenerationType.AUTO)
    private int id;
    ...
    @OneToOne(targetEntity = FotoFather.class, fetch = FetchType.EAGER)
    @JoinColumn(name = "fotoFather", referencedColumnName = "id")
    @Cascade(CascadeType.ALL)
    private FotoFather fotoFather;
}
FotoFather:
@Entity
@Table(name = "foto_father")
public class FotoFather {

    @Id
    @Column(name = "id")
    @GeneratedValue(strategy = GenerationType.AUTO)
    private int id;
    ...
    @Column(name = "foto_natural_url")
    private String fotoNaturalUrl;
    ...
}
If you simply need the complete URL for some application-specific purpose, I would err on the side of not storing the URL with the ID at all, and instead rely on a transient method:
public class FotoFather {

    @Transient
    public String getNaturalUrl() {
        if (fotoNaturalUrl != null && !fotoNaturalUrl.trim().isEmpty()) {
            return String.format("%s?id=%d", fotoNaturalUrl, id);
        }
        return "";
    }
}
In fact, decomposing your URLs into their minimal variable components and storing only those in separate columns can go a long way toward avoiding technical debt, particularly if the URL changes. This way the base URL can be application-configurable, and the variable aspects that determine the final URL endpoint are all you store.
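As a sketch of that idea (the @Value property name, the builder class, and the getId() getter are assumptions, not part of the original design), only the id is stored and the base URL comes from configuration:
@Component
public class FotoUrlBuilder {

    // Hypothetical property, e.g. app.foto.base-url=https://example.com/fotos
    @Value("${app.foto.base-url}")
    private String baseUrl;

    public String buildUrl(FotoFather foto) {
        // Only the variable component (the id) lives in the database;
        // everything else is configuration.
        return String.format("%s?id=%d", baseUrl, foto.getId());
    }
}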
But if you must know the ID ahead of time (or, as in a recent case of mine, keep identifiers sequential without losing a single value), you need an approach where FotoFather identifiers are generated prior to persisting the entity, and thus are not @GeneratedValues.
In order to avoid collisions at insertion, we have a sequence service class that exposes support for fetching the next sequence value by name. The sequence table row is locked at read and updated at commit time. This prevents concurrency issues between sessions using the same sequence, prevents gaps in the range, and allows identifiers to be known ahead of time.
@Transactional
public void save(Father father) {
    Assert.isNotNull(father, "Father cannot be null.");
    Assert.isNotNull(father.getFotoFather(), "FotoFather cannot be null.");

    if (father.getFotoFather().getId() == null) {
        // Joins the existing transaction, or errors if one doesn't exist
        // when sequenceService is invoked.
        Long id = sequenceService.getNextSequence("FOTOFATHER");
        // Update the fotoFather's id.
        father.getFotoFather().setId(id);
    }

    // Save.
    fatherRepository.save(father);
}
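For reference, a minimal sketch of such a sequence service (the SequenceEntity and its accessors are assumptions about the table layout; the row lock is taken via PESSIMISTIC_WRITE and held until the transaction commits):
@Service
public class SequenceService {

    @PersistenceContext
    private EntityManager entityManager;

    // MANDATORY matches the comment in the save method above:
    // this joins the caller's transaction, or errors if none exists.
    @Transactional(propagation = Propagation.MANDATORY)
    public Long getNextSequence(String sequenceName) {
        // Lock the sequence row so concurrent sessions cannot hand out
        // the same value; the lock is released at commit time.
        SequenceEntity seq = entityManager.find(
                SequenceEntity.class, sequenceName, LockModeType.PESSIMISTIC_WRITE);
        Long value = seq.getNextValue();
        seq.setNextValue(value + 1);
        return value;
    }
}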
I think you can do this by registering a @PostPersist callback on your Father class. As the JPA spec notes:
The PostPersist and PostRemove callback methods are invoked for an entity after the entity has been made persistent or removed. These callbacks will also be invoked on all entities to which these operations are cascaded. The PostPersist and PostRemove methods will be invoked after the database insert and delete operations respectively. These database operations may occur directly after the persist, merge, or remove operations have been invoked or they may occur directly after a flush operation has occurred (which may be at the end of the transaction). Generated primary key values are available in the PostPersist method.
So, the callback should be called immediately after the Father instance is written to the database and before the FotoFather instance is written.
public class Father {

    @PostPersist
    public void updateFotoFather() {
        fotoFather.setNaturalUrl("/xyz/" + id);
    }
}
Consider the following:
@Entity
public class MainEntity {

    @OneToOne(orphanRemoval = true, cascade = CascadeType.ALL)
    private ChildEntity childEntity;
}

@Entity
public class ChildEntity {

    @OneToMany(cascade = CascadeType.ALL)
    @LazyCollection(LazyCollectionOption.FALSE)
    private List<AnotherEntity> otherEntities;
}
Now, when I first call
final ChildEntity anewChild = new ChildEntity();
anewChild.addOtherEntity(anotherEntity); // several entities can be added here
mainEntity.setChildEntity(anewChild);
EntityManager.persist(mainEntity);
Everything works fine. Then I do some updates, long after the transaction has finished:
final ChildEntity anotherNewChild = new ChildEntity();
anotherNewChild.addOtherEntity(anotherEntity); // several entities can be added here
mainEntity.setChildEntity(anotherNewChild);
// A log of LOG.info(mainEntity) shows all fields appropriately set.
// At some point during the merge operation, the new ChildEntity will need to be persisted.
// According to my logs, an invocation of EntityManager.persist(anotherNewChild) occurs
// as the merge is propagated to the new entity.
// At this point the ChildEntity.otherEntities is detected to be null.
return EntityManager.merge(mainEntity);
The problem is that, with persist, the List<AnotherEntity> is not null and not empty, while on merge, the List<AnotherEntity> is null.
I am doing this over an EJB remote invocation.
Hibernate 4.3.6
WildFly 8.1.0
JPA 2.1
Is there something I am missing here?
Reproduced the issue with the following code:
https://github.com/marembo2008/hibernate-jpa-bug
Opened an issue on the Hibernate issue tracker:
https://hibernate.atlassian.net/browse/HHH-9751
The problem is that merge returns a managed copy of the entity (the instance you pass in stays detached), so you should do something like this:
mainEntity = EntityManager.merge(mainEntity);
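Any changes made after the merge must then go through the returned managed instance, since the object you passed in stays detached. A short sketch based on the snippets above:
// merge() returns a managed copy; the argument itself stays detached.
MainEntity managed = entityManager.merge(mainEntity);

// Make subsequent changes on the managed copy; changes to the old
// detached reference are not tracked by the persistence context.
managed.setChildEntity(anotherNewChild);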
I have 2 objects joined together, defined as such:
public class A {
    ...
    @Id
    @Column(name = "A_ID")
    @SequenceGenerator(...)
    @GeneratedValue(...)
    public Long getA_ID();

    @OneToOne(mappedBy = "a", fetch = FetchType.LAZY, cascade = CascadeType.ALL, targetEntity = B.class)
    public B getB();
    ...
}
@VirtualAccessMethods(get = "getMethod", set = "setMethod")
public class B {
    ...
    @Id
    public Long getA_ID();

    @MapsId
    @OneToOne(fetch = FetchType.LAZY, cascade = CascadeType.ALL, targetEntity = A.class)
    @JoinColumn(name = "A_ID")
    public A getA();

    public Object getMethod(String name);
    public void setMethod(String name, Object value);
    ...
}
When I call em.merge(A) with B joined onto A for an INSERT, everything works fine. However, if I do the same thing for an UPDATE, it will update only A. The update logic is like so:
@Transactional
public void update(Object fieldOnANewValue, Object fieldOnBNewValue) {
    A objA = em.executeQuery(...); // loads objA by primary key
    objA.setFieldOnA(fieldOnANewValue);
    B objB = objA.getB(); // lazy-loads objB
    objB.setMethod("FieldOnB", fieldOnBNewValue);
}
If I look at the logs, there is a SQL UPDATE statement committing the changes I made to A, but nothing for B. If I manually call em.merge(objB), the same issue exists.
Does anyone know exactly what EclipseLink does to determine whether or not to generate an UPDATE statement, particularly with regard to @VirtualAccessMethods? I have had the @OneToOne mappings set up differently before and em.merge(objB) worked fine then, plus INSERT works, so I'm not sure whether that's the issue. On the flip side, if I have another object that is also joined onto A but is a normal POJO like A, the UPDATE statement is generated for it.
Caching is turned off, and I've verified that the objects are updated correctly before merge is called.
Please show the complete code and mappings.
Given you are using virtual access (are you using it correctly?), it could be some sort of change-tracking issue related to the virtual access. Does the issue occur without virtual access?
Try setting
@ChangeTracking(ChangeTrackingType.DEFERRED)
to see if this has an effect.
You could also try:
#InstantiationCopyPolicy
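For reference, a sketch of how those annotations would sit on entity B from the question (both are EclipseLink annotations; whether either resolves the problem depends on how change tracking interacts with the virtual access):
@Entity
@ChangeTracking(ChangeTrackingType.DEFERRED) // compare full state at commit instead of tracking individual changes
@InstantiationCopyPolicy // clone via the default constructor rather than the default copy policy
@VirtualAccessMethods(get = "getMethod", set = "setMethod")
public class B {
    // ...
}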