Best way to prevent unique constraint violations with JPA - java

I have a Keyword and a KeywordType entity. There are lots of keywords of a few types. When trying to persist the second keyword of a type, the unique constraint is violated and the transaction is rolled back. Searching SO, I found several possibilities (some of them from different contexts, so I'm not sure of their validity here). This post and this post advise catching the exception, which would be of no use to me, as I end up where I started and still need to somehow persist the keyword.
The same applies to the locking proposed for a different situation here. Custom insert statements, as proposed in this and this post, wouldn't work properly, I guess, since I'm using Oracle and not MySQL, and I wouldn't like to tie the implementation to Hibernate. A different workaround would be to try to retrieve the type first in the code generating the keywords, and set it on the keyword if found, or create a new one if not.
So, what would be the best - most robust, portable (for different databases and persistence providers) and sane approach here?
Thank you.
The involved entities:
@Entity
public class Keyword {

    @Id
    @GeneratedValue
    private long id;

    @Column(name = "VALUE")
    private String value;

    @ManyToOne
    @JoinColumn(name = "TYPE_ID")
    private KeywordType type;
    ...
}
and
@Entity
@Table(uniqueConstraints = { @UniqueConstraint(columnNames = { "TYPE" }) })
public class KeywordType {

    @Id
    @GeneratedValue
    private long id;

    @Column(name = "TYPE")
    private String type;
    ...
}

Your last solution is the right one, IMO. Search for the keyword type, and if not found, create it.
Catching the exception is not a good option because:
- it's hard to know which exception to catch while keeping your code portable across JPA implementations and database engines;
- the JPA engine will be in an undetermined state after such an exception, and you should always roll back in this case.
Note, however, that with this technique you might still have two transactions searching for the same type in parallel and then trying to insert it in parallel. One of the transactions will roll back, but this will be much less frequent.
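For illustration, a minimal find-or-create sketch along those lines could look like the following; the query, the method name, and the setter on KeywordType are assumptions, not code from the question.

import java.util.List;
import javax.persistence.EntityManager;

public class KeywordTypeFinder {

    // Hedged sketch of the find-or-create approach; names are illustrative only.
    public KeywordType findOrCreateType(EntityManager em, String typeName) {
        List<KeywordType> existing = em
                .createQuery("select t from KeywordType t where t.type = :type", KeywordType.class)
                .setParameter("type", typeName)
                .getResultList();
        if (!existing.isEmpty()) {
            return existing.get(0);
        }
        KeywordType created = new KeywordType();
        created.setType(typeName); // assumes KeywordType has a setter for its type value
        em.persist(created);       // a concurrent transaction may still insert the same type
                                   // first, in which case this transaction rolls back
        return created;
    }
}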

If you're using EJB 3.1 and you don't mind serializing this operation, a singleton bean using container-managed concurrency can solve the problem.
@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
public class KeywordTypeManager {

    @Inject
    private KeywordTypeDao keywordTypeDao;

    @Lock(LockType.WRITE)
    public void upsert(KeywordType keywordType) {
        // Only one thread can execute this at a time.
        // Your implementation here:
        // ...
    }
}
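One possible body for upsert, assuming the DAO exposes findByType and persist methods (both method names, and a getType() accessor on KeywordType, are assumptions for illustration):

@Lock(LockType.WRITE)
public void upsert(KeywordType keywordType) {
    // findByType and persist are assumed KeywordTypeDao methods, shown for illustration only.
    KeywordType existing = keywordTypeDao.findByType(keywordType.getType());
    if (existing == null) {
        keywordTypeDao.persist(keywordType);
    }
}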

I would go for this option:
A different workaround would be trying to retrieve the type first in the code generating the keywords, and set it on the keyword if found or create a new one if not.

Related

Storing same class objects in both Neo4j and MongoDB using Spring Data

The real issue is that both tools use @Id annotations, and I cannot create that kind of object so that it can be stored in both.
class DataBaseClass {

    @GraphId
    @Field("graphId")
    Long graphId;

    @Indexed(indexName = "mongodb", indexType = IndexType.FULLTEXT)
    @Id
    String _id; // MongoDB id
}
Neo4j's @GraphId extends @Id, so there is no difference between the two. At first I thought changing the graphId type from Long to String would do the trick, but I get a casting exception. I also tried using MongoDB's @Field annotation, with no different results.
In my humble opinion this limitation is a bug in the Spring Data framework, which Neo4j could be spared if the data layer could accept a String id with internal casting.
I think I got the issue covered but I would like to check if there is a way to do this after all.

How to tell Hibernate to conditionally ignore columns in CRUD operations

Is it possible to somehow tell Hibernate to conditionally ignore a missing column in a database table while doing the CRUD operations?
I've got a Java application using Hibernate as persistence layer. I'd like to be able to somehow tell Hibernate: If database version < 50, then ignore this column annotation (or set it transient).
This situation arises due to different database versions at different clients, but the same entity code for all sites. For example, I've got a class where the column description2 might be missing in some databases.
@Entity
@Table(name = "MY_TABLE")
public class MyTable implements java.io.Serializable {

    private Integer serialNo;
    private String pickCode;
    private String description1;
    private String description2;

    @Id
    @Column(name = "Serial_No", nullable = false)
    @GenericGenerator(name = "generator", strategy = "increment")
    @GeneratedValue(generator = "generator")
    public Integer getSerialNo() {
        return this.serialNo;
    }

    @Column(name = "Pick_Code", length = 25)
    public String getPickCode() {
        return this.pickCode;
    }

    @Column(name = "Description1")
    public String getDescription1() {
        return this.description1;
    }

    @Column(name = "Description2") // <- this column might be missing in some databases
    //@TransientIf(...) <- something like this would be nice, or any other solution
    public String getDescription2() {
        return this.description2;
    }
}
Background: I have a large application with a lot of customizations for different clients. Now it happens from time to time that one client (out of, let's say, 500) gets a new feature that requires a database structure update (e.g. a new field in a table). I release a new version for him, he runs a database schema update, and he can use the new feature. But all the other clients won't run an incremental database update every time some client gets a new feature. They just want to use the latest version, but they are affected by the new feature (for that one client) that they will never use.
I think it is only possible if you separate the mapping definition from the entities so that you can replace it. Thus you cannot use annotation-based mapping.
Instead, I would suggest using XML-based mapping and creating different XML mapping files for each client. Since you have about 500 clients, you might want to create groups of clients that all share the same mapping file.
In any case, I think it will be very hard to maintain the different clients' needs with one entity model, and it will lead to a complex code structure. E.g. if you add properties to the entities that can be null for some clients, then you will also add a lot more null checks to your code - one null check for each client-specific property.
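As a rough illustration of the XML-based approach, a session factory could be built from a client-group-specific mapping resource at startup. This is only a sketch; the resource paths, the clientGroup parameter, and the ClientSessionFactoryBuilder class are assumptions, not something from the question.

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class ClientSessionFactoryBuilder {

    // Builds a SessionFactory whose XML mappings come from a per-client-group folder.
    public static SessionFactory build(String clientGroup) {
        Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
        // e.g. "mappings/base/MyTable.hbm.xml" vs. "mappings/v50/MyTable.hbm.xml"
        cfg.addResource("mappings/" + clientGroup + "/MyTable.hbm.xml");
        return cfg.buildSessionFactory();
    }
}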

Persisting third-party classes with no ID's

Say I have the following Java class, which is owned by a vendor so I can't change it:
public class Entry {
private String user;
private String city;
// ...
// About 10 other fields
// ...
// Getters, setters, etc.
}
I would like to persist it to a table, using JPA 2.0 (OpenJPA implementation). I cannot annotate this class (as it is not mine), so I'm using orm.xml to do that.
I'm creating a table containing a column per field, plus another column called ID. Then, I'm creating a sequence for it.
My question is: is it at all possible to tell JPA that the ID that I would like to use for this entity doesn't even exist as a member attribute in the Entry class? How do I go about creating a JPA entity that will allow me to persist instances of this class?
EDIT
I am aware of the strategy of extending the class and adding an ID property to it. However, I'm looking for a solution that doesn't involve extending this class, because I need this solution to also be applicable to the case when it's not only one class that I have to persist, but a collection of interlinked classes - none of which has any ID property. In such a scenario, extending doesn't work out.
Eventually, I ended up doing the following:
@Entity
public class EntryWrapper {

    @Id
    private long id;

    @Embedded
    private Entry entry;
}
So, I am indeed wrapping the entity but differently from the way that had been suggested. As the Entry class is vendor-provided, I did all its ORM work in an orm.xml file. When persisting, I persist EntryWrapper.
I don't have much experience with JPA, but I wouldn't extend your base classes; instead, I would wrap them:
public class PersistMe<T> {

    @Id
    private long id;

    private T objToWrap;

    public PersistMe(T objToWrap) {
        this.objToWrap = objToWrap;
    }
}
I can't test it, if it doesn't work let me know so I can delete the answer.

Avoid having JPA to automatically persist objects

Is there any way to avoid having JPA to automatically persist objects?
I need to use a third-party API and I have to pull/push data from/to it. I've got a class responsible for interfacing with the API, and I have a method like this:
public User pullUser(int userId) {
Map<String,String> userData = getUserDataFromApi(userId);
return new UserJpa(userId, userData.get("name"));
}
Where the UserJpa class looks like:
@Entity
@Table
public class UserJpa implements User {

    @Id
    @Column(name = "id", nullable = false)
    private int id;

    @Column(name = "name", nullable = false, length = 20)
    private String name;

    public UserJpa() {
    }

    public UserJpa(int id, String name) {
        this.id = id;
        this.name = name;
    }
}
When I call the method (e.g. pullUser(1)), the returned user is automatically stored in the database. I don't want this to happen; is there a solution to avoid it? I know a solution could be to create a new class implementing User and return an instance of that class in the pullUser() method; is this a good practice?
Thank you.
A newly created instance of UserJpa is not persisted in pullUser. I also assume that there is not some odd implementation in getUserDataFromApi actually persisting something for the same id.
In your case the entity manager knows nothing about the new instance of UserJpa. Generally, entities are persisted via merge/persist calls or as a result of a cascaded merge/persist operation. Check for these elsewhere in the code base.
The only way in which a new entity gets persisted in JPA is by explicitly calling the EntityManager's persist() or merge() methods. Look in your code for calls to either one of them; that's the point where the persist operation is occurring. Refactor the code to perform the persistence elsewhere.
Generally, JPA entities are managed objects: they reflect their changes into the database when the transaction completes (and, before that, in the first-level cache); obviously, these objects need to become managed in the first place.
I really think the best practice is to use a DTO to handle the data transfer and use the entity just for persistence purposes. That way you get higher cohesion and lower coupling, with no object sticking its nose where it shouldn't.
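For illustration, a minimal sketch of that separation could be a plain DTO with no JPA annotations; the UserDto name is an assumption, and it presumes the User interface only exposes these getters.

// Plain DTO, never touched by the persistence context.
public class UserDto implements User {

    private final int id;
    private final String name;

    public UserDto(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public int getId() {
        return id;
    }

    public String getName() {
        return name;
    }
}

pullUser would then return a UserDto, and a UserJpa entity would only be created and passed to persist at the point where you actually want to write the user to the database.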
Hope it helps.

Lazily loading a clob in hibernate

There's a lot one can find about this by googling a bit, but I haven't quite found a workable solution to this problem.
Basically what I have is a big CLOB on a particular class that I want to have loaded on demand. The naive way to do this would be:
class MyType {
    // ...

    @Basic(fetch = FetchType.LAZY)
    @Lob
    public String getBlob() {
        return blob;
    }
}
That doesn't work, though, apparently because I'm using Oracle drivers, i.e. Lob objects aren't treated as simple handles but are always loaded. Or so I've been led to believe from my forays. There is one solution that uses special instrumentation for lazy property loading, but since the Hibernate docs seem to suggest they're less than interested in making that work correctly, I'd rather not go that route. Especially with having to run an extra compile pass and all.
So the next solution I had envisioned was separating out this object to another type and defining an association. Unfortunately, while the docs give conflicting information, it's apparent to me that lazy loading doesn't work on OneToOne associations with shared primary key. I'd set one side of the association as ManyToOne, but I'm not quite sure how to do this when there's a shared primary key.
So can anybody suggest the best way to go about this?
According to this, only PostgreSQL implements Blob as really lazy. So the best solution is to move the blob to another table. Do you have to use a shared primary key? Why don't you do something like this:
public class MyBlobWrapper {

    @Id
    public Long getId() {
        return id;
    }

    @Lob
    public String getBlob() {
        return blob;
    }

    @OneToOne(fetch = FetchType.LAZY, optional = false)
    public MyClass getParent() {
        return parent;
    }
}
Instead of doing equilibristics with Hibernate annotations, one may just try converting the field from String into Clob (or Blob):
@Lob
@Basic(fetch = FetchType.LAZY)
@Column(name = "FIELD_COLUMN")
public Clob getFieldClob() {
    return fieldClob;
}

public void setFieldClob(Clob fieldClob) {
    this.fieldClob = fieldClob;
}

@Transient
public String getField() {
    if (this.getFieldClob() == null) {
        return null;
    }
    try {
        return MyOwnUtils.readStream(this.getFieldClob().getCharacterStream());
    } catch (Exception e) {
        e.printStackTrace();
    }
    return null;
}

public void setField(String field) {
    this.fieldClob = Hibernate.createClob(field);
}
Worked for me (the field started to load lazily, on Oracle).
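MyOwnUtils.readStream above is the answerer's own helper; a minimal version, assuming it simply drains the character stream into a String, could look like this:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;

public final class MyOwnUtils {

    // Reads the whole character stream into a String.
    public static String readStream(Reader reader) throws IOException {
        StringBuilder sb = new StringBuilder();
        BufferedReader buffered = new BufferedReader(reader);
        char[] chunk = new char[4096];
        int read;
        while ((read = buffered.read(chunk)) != -1) {
            sb.append(chunk, 0, read);
        }
        return sb.toString();
    }
}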
Since you appear to be using Hibernate, I wonder if your problem is related to the following Hibernate feature:
Using Lazy Properties Fetching
Hibernate3 supports the lazy fetching of individual properties. This optimization technique is also known as fetch groups. Please note that this is mostly a marketing feature; optimizing row reads is much more important than optimization of column reads. However, only loading some properties of a class could be useful in extreme cases. For example, when legacy tables have hundreds of columns and the data model cannot be improved.
Lazy property loading requires buildtime bytecode instrumentation. If your persistent classes are not enhanced, Hibernate will ignore lazy property settings and return to immediate fetching.
See Bytecode Instrumentation for Hibernate Using Maven.
Old post, but the only one that helped me, thanks to @TadeuszKopec's answer.
It looks like it is hard to do lazy loading of a blob with JPA. I tried a @OneToOne association, but it complicates things more than it helps.
I just moved the bytes to another class, with no association to MyClass (the parent; same table, same id):
@Entity
@Table(name = "MyTable")
public class MyBlobWrapper {

    @Id
    @Column(name = "id") // id of MyTable, same as MyClass
    private Long id;

    @Lob
    private byte[] bytes;
}

@Entity
@Table(name = "MyTable")
public class MyClass {

    @Id
    @Column(name = "id")
    private Long id;

    // other fields .....
}
Just remember to flush the parent before saving the blob:
em.persist(parent);
em.flush();
em.merge(new MyBlobWrapper(parent_id,new byte[1000]));
Now I can load the pdf alone:
String query1 = " select PDF from MyBlobWrapper PDF where PDF.id = :id";
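For completeness, executing that query through the entity manager could look like this (parentId and the getBytes() accessor are illustrative assumptions, not code from the answer):

MyBlobWrapper pdf = em
        .createQuery("select PDF from MyBlobWrapper PDF where PDF.id = :id", MyBlobWrapper.class)
        .setParameter("id", parentId)
        .getSingleResult();
byte[] bytes = pdf.getBytes(); // assumes a getter for the bytes field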
I am just a beginner with JPA; hope that helps.
