I'm experimenting with JPA and Glassfish 4.0.
I've written a User class like this (only the relevant parts are shown, and I'm not sure it compiles as-is):
public class User implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Basic(optional = false)
    @Column(name = "id")
    private Integer id;

    @Basic(optional = false)
    @NotNull
    @Size(min = 1, max = 50)
    @Column(name = "first_name")
    private String firstName;

    @JoinColumn(name = "country_id", referencedColumnName = "id")
    @ManyToOne(optional = false)
    private Country country;

    public void setCountry(Country country) {
        this.country = country;
    }
}
My TestController (just relevant parts):
@ManagedBean(name = "testController", eager = true)
@RequestScoped
public class TestController implements Serializable {

    @EJB
    private dk.iqtools.session.UserFacade userFacade;

    public String Insert() {
        factory = Persistence.createEntityManagerFactory(PERSISTENCE_UNIT_NAME);
        EntityManager em = factory.createEntityManager();
        Query cq = em.createQuery("select c from Country c where c.id = 302");
        List<Country> countryList = cq.getResultList();
        User user = new User();
        user.setFirstName("Hans123");
        user.setLastName("Knudsen333");
        user.setCountry((Country) countryList.get(0)); // <-- throws the error
        user.setPassword("secret");
        user.setYearOfBirth(1966);
        user.setGender(1);
        user.setEmail("haps@hfhfh.dk2243");
        userFacade.create(user);
        return "";
    }
}
And my Country bean is just a plain bean with simple attributes, located in:
dk.iqtools.entity
In general it works, but once I hit an error in my code I persistently receive the following exception:
Caused by: java.lang.ClassCastException:
dk.iqtools.entity.Country cannot be cast to dk.iqtools.entity.Country
at dk.iqtools.controller.TestController.Insert(TestController.java:65)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
the offending statement is this:
user.setCountry((Country)countryList.get(0));
Can anybody tell me why this happens? If everything runs as expected, the user is inserted into the database. But if I, for instance, try to insert a user that already exists, I receive a database error.
The next time through, I receive the weird exception above. I can't understand why a class can't be cast to itself.
I have to restart my GlassFish instance to get rid of it.
Not very production-like.
Thanks for any input.
It happens because of leftovers from the EntityManagerFactory of the old version of your application: somehow its classloader survived after you redeployed your app.
What you need to do is close the EntityManagerFactory just before the redeployment occurs. You can use a ServletContextListener to achieve that; it lets you attach events to your web app's initialization and destruction.
Here is a very simple example of a ServletContextListener implementation: http://4dev.tech/2015/08/example-of-servletcontextlistener-implementation/
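A minimal sketch of such a listener, assuming the application creates a single factory at startup and the persistence unit name "myPU" (both are assumptions, not taken from the question):

```java
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Closes the EntityManagerFactory on undeploy, so no stale
// classloader references survive a redeployment.
@WebListener
public class EmfLifecycleListener implements ServletContextListener {

    private EntityManagerFactory factory;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        factory = Persistence.createEntityManagerFactory("myPU"); // unit name is an assumption
        sce.getServletContext().setAttribute("emf", factory);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        if (factory != null && factory.isOpen()) {
            factory.close(); // released before the old classloader is discarded
        }
    }
}
```

With this in place, controllers would look the factory up from the servlet context instead of creating their own.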
I have just had to deal with this same type of exception, "Can't cast X to X", where X is the same class.
It turned out to be two problems feeding into each other. I had made a change in one of my code's pom.xml files (Maven's makefile) that said my class was to be included (scope compile instead of provided) in the WAR file. But it was also available in the outer EAR module.
The class loader was dealing with two different instances of the same jar file (i.e., two copies of it). That made the class loader treat them as two different classes, causing the exception.
To ultimately fix it I had to change the scope from compile to provided in the WAR file's pom.xml, but make sure the EAR file's pom did include the project's jar files.
I then also had to run a clean to delete all the WAR and EAR files' jar library contents, so that there were no extra copies left to be found.
Sorting out the contents of the files can get confusing: the WAR is inside the EAR, you have to un-zip the WAR to get a listing of its contents, and running the build multiple times produces differing dates on the files.
I'm currently learning Spring-Boot and Spring-Data-JPA.
I'm using a postgresql database for storing the data.
My goal is to store ingredients with a unique, custom ID (you just type it in when creating the ingredient), and when another ingredient with the same ID gets inserted, there should be some kind of error. In my understanding, this is what happens when I use the @Id annotation; Hibernate also logs the correct create table statement.
This is my Ingredient class:
public class Ingredient {

    @Id
    @Column(name = "ingredient_id")
    private String ingredient_id;

    @Column(name = "name")
    private String name;

    @Column(name = "curr_stock")
    private double curr_stock;

    @Column(name = "opt_stock")
    private double opt_stock;

    @Column(name = "unit")
    private String unit;

    @Column(name = "price_per_unit")
    private double price_per_unit;

    @Column(name = "supplier")
    private String supplier;

    // ... getters, setters, constructors (they work fine, I can insert and get the data)
}
My controller looks like this:
@RestController
@RequestMapping(path = "api/v1/ingredient")
public class IngredientController {

    private final IngredientService ingredientService;

    @Autowired
    public IngredientController(IngredientService ingredientService) {
        this.ingredientService = ingredientService;
    }

    @GetMapping
    public List<Ingredient> getIngredients() {
        return ingredientService.getIngredients();
    }

    @PostMapping
    public void registerNewStudent(@RequestBody Ingredient ingredient) {
        ingredientService.saveIngredient(ingredient);
    }
}
And my service class just uses the save() method from the JpaRepository to store new ingredients.
Up to this point I had the feeling that I understood the whole thing. But when I sent two POST requests to my application, each containing an ingredient with the id "1234", and then listed all ingredients with a GET request, the first ingredient had simply been replaced by the second one, with no error or anything like that in between.
Sending direct SQL INSERT statements with the same values to the database throws an error, because the primary key constraint gets violated, just as it should be. In my understanding, exactly this should have happened after the second POST request.
What did I get wrong?
Update:
From the terminal output and the answers I got below, it is now clear that the save() method can be understood as "insert, or update if the primary key already exists".
But is there a better way around this than error-handling by hand every time a new entry is saved?
The save method will create the entry, or update it if the id already exists. I'd switch to auto-generating the ID on insert instead of creating the IDs manually; that would prevent the issue you have.
When saving a new ingredient, JPA will perform an update if the value contained in the "id" field is already in the table.
A nice way to achieve what you want is:
ingredientRepository.findById(ingredientDTO.getIngredientId())
    .ifPresentOrElse(
        ingredientEntity -> ResponseEntity.badRequest().build(),
        () -> ingredientRepository.save(ingredientDTO));
You return an error if the entity is already in the table; otherwise (the second lambda, run when the Optional is empty) you save the new row.
This is a downside to using CrudRepository's save() on an entity whose id is set by the application.
Under the hood, EntityManager.persist() will only be called if the id is null; otherwise EntityManager.merge() is called.
Using the EntityManager directly gives you more fine-grained control: you can call persist in your application when required.
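Another option that keeps the repository API is implementing Spring Data's Persistable interface, so that save() calls persist() even for manually assigned ids, and a duplicate key then fails with an exception. A sketch under that assumption (the new/not-new bookkeeping is simplified):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.PostLoad;
import javax.persistence.PostPersist;
import javax.persistence.Transient;
import org.springframework.data.domain.Persistable;

@Entity
public class Ingredient implements Persistable<String> {

    @Id
    private String ingredient_id;

    // Not a column: tracks whether this instance was loaded from the database.
    @Transient
    private boolean isNew = true;

    @Override
    public String getId() {
        return ingredient_id;
    }

    // Spring Data checks this instead of "id == null"; returning true
    // makes save() call persist(), so a duplicate key now raises an error.
    @Override
    public boolean isNew() {
        return isNew;
    }

    @PostLoad
    @PostPersist
    void markNotNew() {
        this.isNew = false;
    }
}
```

With this, the second POST with id "1234" fails at flush time with a constraint violation instead of silently updating the first row.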
I am experiencing some strange behaviour with Hibernate Validator 6.0.9.Final when validating an object. Given the following POJO:
public class Log {

    private Long uid;

    @javax.validation.constraints.NotEmpty
    @javax.validation.constraints.NotNull
    private String value;

    private long reference;
    private String type;
    private String createdDt;
    private String createdBy;
    private Boolean actioned;
    private Long dxr;

    // Constructors...
    // Getters and setters...
    // Equals and hashCode overrides...
}
Now, if I validate a new instance of the entity in a JUnit test, I get the following output:
@Test
public void test() {
    Log cLog = new Log();
    ValidatorFactory validatorFactory = Validation.buildDefaultValidatorFactory();
    Validator beanvalidator = validatorFactory.getValidator();
    Set<ConstraintViolation<Log>> constraintViolations = beanvalidator.validate(cLog);
    System.out.println(constraintViolations);
}
[ConstraintViolationImpl{interpolatedMessage='must not be null', propertyPath=value, rootBeanClass=class Log, messageTemplate='{javax.validation.constraints.NotNull.message}'},
ConstraintViolationImpl{interpolatedMessage='must not be empty', propertyPath=value, rootBeanClass=class Log, messageTemplate='{javax.validation.constraints.NotEmpty.message}'}]
Which is fine. However, if I run this same code in a Hibernate/Jersey project, only the @NotNull validation runs. I cannot seem to get the @NotEmpty validator to run on the object.
I have tried removing the @NotNull validator (since @NotEmpty covers it), but I left it in for completeness, to demonstrate that the @NotEmpty validator is not returning anything in the output above.
I do not understand why it does not run when deployed to a web project but works fine under a JUnit test. Is there something I am missing here?
Okay, so I have resolved this. I hadn't realised that I had deployed the project to the wrong server: a Payara 4.1.1.171 server instead of a Payara 5.181 server. However, I would have expected some kind of information to be displayed, perhaps at deployment time.
I have a Java class that uses the DataStax Cassandra driver to write a POJO to a Cassandra table. Everything works fine until it comes to writing a class object to the Cassandra table. It throws this error:
Caused by: com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [frozen< projKeySpace.smi > <-> code.generic.common.data.MyCustomSmiObject]
So I have tried a lot of different things to make the attribute frozen, but nothing works and I keep getting the same error. Here is an example of the class object.
@Table(keyspace = "projkeyspace", name = "summarytable")
public class DataGroupingObject implements Serializable {

    @Column(name = "objid")
    private String objId;

    @Column(name = "timeofjob")
    private Date timeOfJob;

    @Column(name = "smiobjectinput")
    @Frozen
    // Have also tried:
    // @Frozen("frozen<projKeySpace.smi>")
    // @Frozen("frozen<smi>")
    // @Frozen("frozen<MyCustomSmiObject>")
    // ...and all other permutations I can think of.
    private MyCustomSmiObject myCustomSmiObject; // the problem attribute

    @Column(name = "column5")
    private String dataForColumn5;

    // Getters and setters...
}
So what am I overlooking? Digging into the datastax documentation didn't show much beyond this, http://docs.datastax.com/en/drivers/java/2.2/com/datastax/driver/mapping/annotations/Frozen.html , which I tried.
I have also tried mapping MyCustomSmiObject to the frozen 'projkeyspace.smi', and that didn't work (of course, I didn't think it would, since there isn't actually a table in Cassandra called smi; it's just a type), but here is an example of it:
@Table(keyspace = "projkeyspace", name = "smi")
public class MyCustomSmiObject implements Serializable {

    @Column(name = "idstring")
    private String idString;

    @Column(name = "valuenum")
    private Double valueNum;

    // Getters and setters...
}
So like I said, I am at a loss. Any help would be greatly appreciated and thanks in advance!
smi is a UDT, isn't it? In that case MyCustomSmiObject should be annotated with @UDT(keyspace = "projkeyspace", name = "smi") instead of @Table. By doing that, the driver should detect that this is a UDT and register a custom codec for it, which will allow it to properly serialize and deserialize the type.
On another note, the @Frozen annotation currently has no impact on the mapper; it is only informational at this time, until the mapper has support for schema generation.
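A sketch of the suggested mapping, assuming the UDT's fields are named idstring and valuenum in the schema (@UDT and @Field come from the same mapping-annotations package as @Table):

```java
import com.datastax.driver.mapping.annotations.Field;
import com.datastax.driver.mapping.annotations.UDT;
import java.io.Serializable;

// Mapped as a user-defined type rather than a table, so the mapper
// registers a codec for frozen<projkeyspace.smi> automatically.
@UDT(keyspace = "projkeyspace", name = "smi")
public class MyCustomSmiObject implements Serializable {

    @Field(name = "idstring")
    private String idString;

    @Field(name = "valuenum")
    private Double valueNum;

    // Getters and setters...
}
```

DataGroupingObject then keeps its plain @Column(name = "smiobjectinput") field; no @Frozen variant is needed for the codec lookup to succeed.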
I have a form that fills a POJO called Father. Inside it, I have a FotoFather field.
When I save a new Father, I automatically save the FotoFather object as well (with the Hibernate ORM pattern).
FotoFather.fotoNaturalUrl must be filled with the value of Father.id, and here is the problem: when I'm saving Father to the db, of course I don't yet have the Father.id value to fill FotoFather.fotoNaturalUrl with. How can I solve this problem?
Thank you
@Entity
@Table(name = "father")
public class Father implements Serializable {
    ...
    @Id
    @Column(name = "id")
    @GeneratedValue(strategy = GenerationType.AUTO)
    private int id;
    ...
    @OneToOne(targetEntity = FotoFather.class, fetch = FetchType.EAGER)
    @JoinColumn(name = "fotoFather", referencedColumnName = "id")
    @Cascade(CascadeType.ALL)
    private FotoFather fotoFather;
}
The FotoFather class:
@Entity
@Table(name = "foto_father")
public class FotoFather {

    @Id
    @Column(name = "id")
    @GeneratedValue(strategy = GenerationType.AUTO)
    private int id;
    ...
    @Column(name = "foto_natural_url")
    private String fotoNaturalUrl;
    ...
}
If you simply need the complete URL for some application-specific purpose, I would err on the side of not storing the URL with the ID at all, and instead rely on a transient method.
public class FotoFather {

    @Transient
    public String getNaturalUrl() {
        if (fotoNaturalUrl != null && fotoNaturalUrl.trim().length() > 0) {
            return String.format("%s?id=%d", fotoNaturalUrl, id);
        }
        return "";
    }
}
In fact, decomposing your URLs even further into their minimal variable components, and storing only those in separate columns, can go a long way toward reducing technical debt, particularly if the URL changes. This way the base URL can be application-configurable, and the variable aspects that determine the final URL endpoint are all you store.
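A small, self-contained sketch of that idea; the base URL and path names here are made-up placeholders, not values from the question:

```java
// Builds the full URL on demand from a configurable base plus the
// stored variable parts, instead of persisting the whole URL.
public class FotoUrlBuilder {

    private final String baseUrl; // application-configurable, e.g. from properties

    public FotoUrlBuilder(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    public String naturalUrl(String path, int id) {
        if (path == null || path.trim().isEmpty()) {
            return ""; // mirrors the transient getter's null/blank guard
        }
        return String.format("%s/%s?id=%d", baseUrl, path, id);
    }

    public static void main(String[] args) {
        FotoUrlBuilder builder = new FotoUrlBuilder("https://example.org/fotos");
        System.out.println(builder.naturalUrl("father", 42));
        // prints "https://example.org/fotos/father?id=42"
    }
}
```

If the host or scheme ever changes, only the configured base moves; the persisted columns stay untouched.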
But if you must know the ID ahead of time (or, as in a recent case of mine, keep identifiers sequential without losing a single value), you need an approach where FotoFather identifiers are generated prior to persisting the entity; thus they are not @GeneratedValues.
In order to avoid collisions at insertion, we have a sequence service class that exposes support for fetching the next sequence value by name. The sequence table row is locked at read and updated at commit time. This prevents concurrency issues between sessions using the same sequence, prevents gaps in the range, and allows identifiers to be known ahead of time.
@Transactional
public void save(Father father) {
    Assert.isNotNull(father, "Father cannot be null.");
    Assert.isNotNull(father.getFotoFather(), "FotoFather cannot be null.");
    if (father.getFotoFather().getId() == null) {
        // Joins the existing transaction, or errors if one doesn't exist
        // when sequenceService is invoked.
        Long id = sequenceService.getNextSequence("FOTOFATHER");
        // Updates the FotoFather's id.
        father.getFotoFather().setId(id);
    }
    // Save.
    fatherRepository.save(father);
}
I think you can do this by registering a @PostPersist callback on your Father class. As the JPA spec notes:
The PostPersist and PostRemove callback methods are invoked for an entity after the entity has been made persistent or removed. These callbacks will also be invoked on all entities to which these operations are cascaded. The PostPersist and PostRemove methods will be invoked after the database insert and delete operations respectively. These database operations may occur directly after the persist, merge, or remove operations have been invoked or they may occur directly after a flush operation has occurred (which may be at the end of the transaction). Generated primary key values are available in the PostPersist method.
So, the callback should be called immediately after the Father instance is written to the database and before the FotoFather instance is written.
public class Father {

    @PostPersist
    public void updateFotoFather() {
        fotoFather.setNaturalUrl("/xyz/" + id);
    }
}
I have two objects joined together, defined as follows:
public class A {
    ...
    @Id
    @Column(name = "A_ID")
    @SequenceGenerator(...)
    @GeneratedValue(...)
    public Long getA_ID();

    @OneToOne(mappedBy = "a", fetch = FetchType.LAZY, cascade = CascadeType.ALL, targetEntity = B.class)
    public B getB();
    ...
}

@VirtualAccessMethods(get = "getMethod", set = "setMethod")
public class B {
    ...
    @Id
    public Long getA_ID();

    @MapsId
    @OneToOne(fetch = FetchType.LAZY, cascade = CascadeType.ALL, targetEntity = A.class)
    @JoinColumn(name = "A_ID")
    public A getA();

    getMethod(String name);
    setMethod(String name, Object value);
    ...
}
When I call em.merge(A) with B joined onto A for an INSERT, everything works fine. However, if I do the same thing for an update, it will update only A. The update logic is like so:
@Transactional
public void update(Object fieldOnANewValue, Object fieldOnBNewValue) {
    A objA = em.executeQuery(...); // loads objA by primary key
    objA.setFieldOnA(fieldOnANewValue);
    B objB = objA.getB(); // lazy-loads objB
    objB.setMethod("FieldOnB", fieldOnBNewValue);
}
If I look at the logs, there is a SQL UPDATE statement committing the changes I made to A, but nothing for B. If I manually call em.merge(objB), the same issue exists. Does anyone know exactly what EclipseLink does to determine whether or not to generate an UPDATE statement, particularly with regard to @VirtualAccessMethods? I have had the @OneToOne mappings set up differently before, and em.merge(objB) worked fine then; plus, INSERT works, so I'm not sure that's the issue. On the flip side, if I have another object that is also joined onto A but is just a normal POJO like A, the UPDATE statement is generated for that. Caching is turned off, and I've verified that the objects are updated correctly before merge is called.
Please show the complete code and mappings.
Given you are using virtual access (are you using it correctly?), it could be some sort of change-tracking issue related to the virtual access. Does the issue occur without virtual access?
Try setting
@ChangeTracking(ChangeTrackingType.DEFERRED)
to see if this has an effect.
You could also try
@InstantiationCopyPolicy