I'm writing a project for college and I've encountered some strange phenomena.
The program is supposed to serve a restaurant, so it has a server side that manages the needs of the different front ends. The front ends are a "dinner terminal", a "kitchen terminal", a "waiter terminal" and an "admin terminal".
When I add an object to the DB, I see it in the DB, the kitchen terminal receives it, and I can see that the object it gets is the right one.
public void addSessionOrder(String id, SessionOrder sessionOrder)
{
    Session context = clientSessions.get(id);
    context.beginTransaction();
    context.save(sessionOrder);
    context.getTransaction()
           .commit();
}
Notice that each terminal (connection) has its own Hibernate session.
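For context, the bookkeeping behind clientSessions presumably looks something like the sketch below; only the clientSessions name and the DatabaseContext class from the stack trace are taken from the question, the rest is an assumption:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class DatabaseContext {

    private final SessionFactory sessionFactory;

    /** one Hibernate Session per connected terminal, keyed by connection id (sketch, not the actual code) **/
    private final Map<String, Session> clientSessions = new ConcurrentHashMap<>();

    public DatabaseContext(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void registerClient(String id) {
        // every terminal gets its own session, i.e. its own first-level cache
        clientSessions.put(id, sessionFactory.openSession());
    }

    public void unregisterClient(String id) {
        Session session = clientSessions.remove(id);
        if (session != null) {
            session.close();
        }
    }
}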
However, once I try to update the status of a SessionOrder I get this exception:
java.lang.NullPointerException
at database.DatabaseContext.updateSessionOrderStatus(DatabaseContext.java:170)
at protocol.handlers.UpdateSessionOrderStatusHandler.handle(UpdateSessionOrderStatusHandler.java:35)
at server.ResturantServer.handleClientMessage(ResturantServer.java:126)
at server.ConnectionManager.handleClientMessage(ConnectionManager.java:86)
at server.SockJSSocketHandler$3.handle(SockJSSocketHandler.java:55)
this is the method:
public void updateSessionOrderStatus(String id, SessionOrder order, OrderStatus newStatus)
{
    Session context = clientSessions.get(id);
    context.beginTransaction();
    SessionOrder ord = (SessionOrder) context.get(SessionOrder.class, order.getOrderId());
    ord.setStatus(newStatus);
    context.update(ord);
    context.getTransaction()
           .commit();
}
The line that throws the exception is "ord.setStatus(newStatus);"
After debugging, this is the info I have:
The fields id and sessionOrder contain valid data and are initialized as needed.
sessionOrder.getOrderId() returns the ID of a corresponding object in the DB (it exists).
The query on the DB returns null to ord.
Another thing I've noticed: if I turn the server off (kill Hibernate) and restart everything, the whole thing works fine. So I believe it has something to do with the fact that some session X inserted the object into the DB, and some other session Y tries to retrieve it afterwards.
The sessions are different and originate from different connections, and the specific order exists in the DB, so I see no reason for it not to return the value.
I think it has something to do with caching of the model, but I'm quite a noob in Hibernate, so I can't pinpoint the problem.
You say that sessionOrder.getOrderId() returns the ID of a corresponding object that exists in the DB, yet the query on the DB returns null to ord.
The ord object is null, so it throws an NPE. I guess that Hibernate cannot find the order with the given ID.
Add some logging here and you'll see what causes the trouble.
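Something along these lines would surface the problem instead of crashing; this is only a sketch that reuses the method from the question, and it assumes an SLF4J logger field named log:
public void updateSessionOrderStatus(String id, SessionOrder order, OrderStatus newStatus)
{
    Session context = clientSessions.get(id);
    context.beginTransaction();
    SessionOrder ord = (SessionOrder) context.get(SessionOrder.class, order.getOrderId());
    if (ord == null) {
        // log instead of running into a NullPointerException
        log.warn("No SessionOrder found for id {} (client session {})", order.getOrderId(), id);
        context.getTransaction().rollback();
        return;
    }
    ord.setStatus(newStatus);
    context.update(ord);
    context.getTransaction().commit();
}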
I am trying to build a booking portal. A booking has a check-in datetime and a check-out datetime. The Spring Boot application will run in many replicas.
My problem is to ensure that no overlapping bookings are possible.
First of all, here is my repository method to check whether a time range is blocked:
@Lock(LockModeType.PESSIMISTIC_READ)
@QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "1000")})
Optional<BookingEntity> findFirstByRoomIdAndCheckInBeforeAndCheckOutAfter(Long roomId, LocalDateTime checkIn, LocalDateTime checkOut);
As you can see, I am using findFirstBy and not findBy because the request could return more than one result.
With this I can check whether the time range is blocked (notice that in the call I swap the requested check-in and check-out):
findFirstByRoomIdAndCheckInBeforeAndCheckOutAfter(roomId, requestCheckOut, requestCheckIn).isPresent();
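For reference, this is roughly how that check sits inside a transactional service. It is only a sketch: the class name MyService, the constructor wiring and BookingRepository are assumptions, while the repository method and the bookingBlocked name come from the snippets in this question:
import java.time.LocalDateTime;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class MyService {

    private final BookingRepository bookingRepository;

    public MyService(BookingRepository bookingRepository) {
        this.bookingRepository = bookingRepository;
    }

    // the PESSIMISTIC_READ lock requested by the repository method is held until the
    // surrounding transaction ends
    @Transactional
    public boolean bookingBlocked(LocalDateTime checkIn, LocalDateTime checkOut, long roomId) {
        // arguments swapped on purpose: an existing booking overlaps the request if it
        // starts before the requested check-out and ends after the requested check-in
        return bookingRepository
                .findFirstByRoomIdAndCheckInBeforeAndCheckOutAfter(roomId, checkOut, checkIn)
                .isPresent();
    }
}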
And everything happens in the controller:
@PostMapping("")
@Transactional
public String myController(LocalDateTime checkIn, LocalDateTime checkOut, long roomId) {
    try {
        if (myService.bookingBlocked(checkIn, checkOut, roomId)) {
            Log.warning("Booking blocked");
            return "no!";
        }
        bookingService.createBooking(checkIn, checkOut, roomId);
        return "good job";
    } catch (Exception exception) {
        return "something went wrong";
    }
}
Everything works fine, but I can simulate an overlapping booking if I set a breakpoint after the bookingBlocked check in two replicas. The following happens:
Replica 1 checks if the booking is free (it's OK)
Replica 2 checks if the booking is free (it's OK)
Replica 1 creates a new entity
Replica 2 creates a new entity
Now I have an overlapping booking.
My idea was to create a @Constraint, but that is not possible with Hibernate in the MySQL dialect. Then I tried to create a @Check, but this also seems to be impossible.
My next idea was to create a lock inside a transaction (it is already in the code at the top). This seems to work, but I am not sure whether I have implemented it correctly. If I try the same as before with the breakpoints, the following happens:
Replica 1 checks if the booking is free (it's OK)
Replica 2 checks if the booking is free (it's OK)
Replica 1 creates a new entity
Replica 2 creates a new entity (an exception is thrown and silently rolled back)
The controller returns a 500.
I am not sure where the exception is thrown. I am not sure what happens when a replica is shut down during the lock. I am not sure whether I can create a deadlock in the database. And it is still possible to manipulate the database via an SQL query to create an overlap.
Could anybody tell me whether this is the correct way? Is there a possibility to create a constraint on the database with Hibernate (I don't use migration scripts, and adding them just for one constraint would not be great)? Should I use optimistic locking? Every idea is welcome.
I have a somewhat strange situation which I need to deal with, but can't seem to find a solution.
I need to solve a potential race condition on customer insertion. We receive the customers through a topic, so they come with an id (we keep it because it's the same id we have in a different database for a different microservice). So, if by some chance the same customer is committed to the database before our flush operation runs, we should update the record in the database with the one that arrived through the topic, provided its last activity field is after the last activity field on the DB entry.
The problem we encounter is that, while the flush recognizes the newly committed customer and throws a ConstraintViolationException, when it gets to the find line it returns the customer we tried to persist above, not the customer in the database.
The code breaks down like this.
try {
    entityManager.persist(customer);
    // at this point, I insert a new customer in the database with the same id as the one I've persisted
    entityManager.flush();
} catch (PersistenceException e) {
    if (e.getCause() instanceof ConstraintViolationException) {
        dbCustomer = Optional.of(entityManager.find(Customer.class, customer.getId()));
        // update the DB customer with data from the persisted customer if the last update date
        // on the persisted customer is after the one on the DB customer
    }
}
I tried different options for transaction propagation, with no success. I also tried to detach(customer) before trying to find the DB customer, but in that case the find call returns null.
Thanks
As soon as a flush fails, the persistence context is essentially broken. If you need to do something with the result of this code block that needs flushing, you need to do that in a new transaction in case of a constraint violation.
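A sketch of that idea with Spring transaction propagation; the class and method names here are illustrative, and REQUIRES_NEW only takes effect when the method is called through the Spring proxy, i.e. from another bean:
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class CustomerLookupService {

    @PersistenceContext
    private EntityManager entityManager;

    // runs in a fresh transaction with a fresh persistence context, so find() returns the
    // row that won the race instead of the entity we failed to persist
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public Customer findExisting(Object customerId) {
        return entityManager.find(Customer.class, customerId);
    }
}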
I've been struggling for a few hours with this one and could do with some help.
A client sends an object that contains a list;
One of the objects in the list has been modified on the client;
In some cases I don't want that modified entity to be persisted to the database, I want to keep the original database values.
I have tried the following and various attempts to clear(), refresh() and flush() the session:
List<Integer> notToModifyIds = dao.getDoNotModifyIds(parentEntity.getId());
MyEntity entityFromClient, entityFromDb;
for (Integer notToModifyId : notToModifyIds) {
    ListIterator<MyEntity> iterator = parentEntity.getEntities().listIterator();
    while (iterator.hasNext()) {
        entityFromClient = iterator.next();
        if (Objects.equals(entityFromClient.getId(), notToModifyId)) {
            dao.evict(entityFromClient);
            entityFromDb = (MyEntity) dao.get(MyEntity.class, notToModifyId);
            iterator.remove();          // ListIterator.remove() takes no argument
            iterator.add(entityFromDb); // put the database version back in its place
        }
    }
}
However, no matter what I try, I always get the values from the client persisted to the database. When I add a breakpoint after iterator.add() I can check that the database value has not been updated at that point, so I know that if I could load the entity from the DB then I would have the value I want.
I'm feeling a little stupid!
I don't know if I got the whole scenario here. Are those modified "entitiesFromClient" attached to the Hibernate session? If they are, the changes were probably automatically flushed to the database before you "evicted" them.
Setting a MANUAL flush mode would help you avoid the automatic behaviour.
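A minimal sketch of that suggestion, assuming you have access to the underlying Hibernate Session (setFlushMode(FlushMode) is the Hibernate 3/4 API; newer versions call it setHibernateFlushMode):
Session session = sessionFactory.getCurrentSession();
// nothing is flushed automatically anymore; dirty entities only hit the database
// when flush() is called explicitly
session.setFlushMode(FlushMode.MANUAL);

// ... evict / replace the entities that must not be persisted ...

session.flush(); // write the remaining changes once the object graph looks right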
First of all, I would enable the Hibernate SQL logging to see more precisely what is happening. See Enable Hibernate logging.
Checking the database in another session (while stopped at the breakpoint) will not help if this code is running within a transaction. Even if the change was already flushed to the database, you wouldn't see it until the transaction is committed.
I have a Hibernate query (Hibernate 3) that only reads data from the database. The database is updated by a separate application, and the query result does not reflect the changes in the database.
With a bit of research, I think it may have something to do with the Hibernate L2 cache (I don't think it's the L1 cache since I always open a new session and close it after it's done).
Session session = sessionFactoryWrapper.getSession();
List<FlowCount> result = session.createSQLQuery(flowCountSQL).list();
session.close();
I tried disabling the second-level cache in the Hibernate config file, but it's not working:
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.cache.use_query_cache">false</property>
<property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property>
I also added session.setCacheMode(CacheMode.REFRESH); after Session session = sessionFactoryWrapper.getSession(); to force a refresh of the L1 cache, but it's still not working...
Is there another way to pick up the changes in the database? Am I doing something wrong in how I disable the cache? Thanks.
Update:
I did another experiment by monitoring the database query log:
Run the code the 1st time. Check the log. The query shows up.
Wait a few minutes. The data has been changed by another application; I verified this through MySQL Workbench. To distinguish it from the previous query I add a dummy condition.
Run the code a 2nd time. Check the log and the query shows up.
Both times I'm using the same query, but since the data has changed, the result should be different; somehow it's not...
In order to force an L1 cache refresh you can use the refresh(Object) method of Session.
From the Hibernate Docs,
Re-read the state of the given instance from the underlying database.
It is inadvisable to use this to implement long-running sessions that
span many business tasks. This method is, however, useful in certain
special circumstances. For example
where a database trigger alters the object state upon insert or update
after executing direct SQL (eg. a mass update) in the same session
after inserting a Blob or Clob
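In code that boils down to something like this (a sketch; flowCountId is just an illustrative identifier for a row that the other application modifies):
FlowCount flowCount = (FlowCount) session.get(FlowCount.class, flowCountId);
// ... the row is changed by the other application in the meantime ...
session.refresh(flowCount); // re-reads the row and overwrites the cached state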
Moreover, you mentioned that you added session.setCacheMode(CacheMode.REFRESH) to force a refresh of the L1 cache. This won't work because CacheMode doesn't have anything to do with the L1 cache. From the Hibernate Docs again:
CacheMode controls how the session interacts with the second-level
cache and query cache.
Without a second-level cache and query cache, Hibernate will always fetch all data from the database in a new session.
You can check which query exactly is executed by Hibernate by enabling DEBUG log level for org.hibernate package (and TRACE level for org.hibernate.type if you want to see bound variables).
How old is the change that the query is reflecting? If it is showing the changes after some time, it might have to do with how you obtain your session.
I am not familiar with the SessionFactoryWrapper class; is this a custom class that you wrote? Are you somehow caching the session object longer than necessary? If so, the query will reuse the objects that have already been loaded in the session. This is the idea behind the repeatable-read semantics that Hibernate guarantees.
You can clear the session before running your query and it will then return the latest data.
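A sketch of that, based on the snippet from the question (only the clear() call is new; whether it helps depends on how long the wrapper keeps the session around):
Session session = sessionFactoryWrapper.getSession();
session.clear(); // detach everything in the first-level cache before querying
List<FlowCount> result = session.createSQLQuery(flowCountSQL).list();
session.close();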
Hibernate's built-in connection pooling mechanism is rudimentary and not intended for production use.
Replace it with a production-quality alternative like c3p0.
I had the exact same issue where stale data was returned until I started using c3p0.
Just in case it IS the 1st-level cache:
Can you show the query you make?
See following Bugs:
https://hibernate.atlassian.net/browse/HHH-9367
https://jira.grails.org/browse/GRAILS-11645
Additional:
http://howtodoinjava.com/2013/07/01/understanding-hibernate-first-level-cache-with-example/
http://www.dineshonjava.com/p/cacheing-in-hibernate-first-level-and.html#.VhZ7o3VElhE
Repeatable finder problem caused by Hibernate's 1st-level cache
To be clear, both tests succeed, which is not logical at all:
userByEmail('foo@bar.com').email != 'foo@bar.com'
Complete Test
@Issue('https://jira.grails.org/browse/GRAILS-11645')
class FirstLevelCacheSpec extends IntegrationSpec {

    def sessionFactory

    def setup() {
        User.withNewSession {
            User user = new User(email: 'test@test.org', password: 'test-password')
            user.save(flush: true, failOnError: true)
        }
    }

    private void updateObjectInNewSession() {
        User.withNewSession {
            def u = User.findByEmail('test@test.org', [cache: false])
            u.email = 'foo@bar.com'
            u.save(flush: true, failOnError: true)
        }
    }

    private User userByEmail(String email) {
        User.findByEmail(email, [cache: false])
    }

    def "test first update"() {
        when: 'changing the object in another session'
        updateObjectInNewSession()

        then: 'retrieving the object by changed identifier (no 2nd level cache)'
        userByEmail('foo@bar.com').email == 'foo@bar.com'
    }

    def "test stale object in 1st level"() {
        when: 'changing the object after pulling objects to cache by finder'
        userByEmail('test@test.org')
        updateObjectInNewSession()

        then: 'retrieving the object by changed identifier (no 2nd level cache)'
        userByEmail('foo@bar.com').email != 'foo@bar.com'
    }
}
I have an application with a table, and when you click on an item in the table it fills a group of text fields with the item's data (FieldGroup); the user then has the option of saving the changes. I was wondering how I would save the changes the user makes to my Postgres database. I am using Vaadin and Hibernate for this application. So far I have tried to do
editorField.commit() // after the user clicks the save button
I have tried
editorField.commit()
hbsession.persist(editorField) //hbsession is the name of my Session
and I have also tried
editorField.commit();
hbsession.save(editorField);
The last two give me the following error:
Caused by: org.hibernate.SessionException: Session is closed!
Well, the first thing you need to realize is that Vaadin differs from conventional request/response web frameworks. Vaadin is an *event driven* framework, very similar to Swing. It builds an application context from the very first click of the user and holds it during the whole website visit. The problem is that there is no request entry point where you can open a Hibernate session and no response point where you can close it. There are tons of requests during a single click on a button.
So the entitymanager-per-request pattern is completely useless. It is better to use one standalone EM, or the EM-per-session pattern with hibernate.connection.release_mode set to after_transaction, to keep the connection pool small.
As for JPAContainer, it is not usable as soon as you need to refresh the container or handle beans with relations. Also, I did not manage to get it working with batch loading, so every read of an entry or relation results in one select against the DB. It does not support lazy loading.
All you need is an open EM/session. Try the suggested patterns, or open an EM/session for every transaction and merge your bean first.
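A rough sketch of the session-per-transaction variant with a merge; the detached bean name item is illustrative, and HibernateUtil is the helper used in the code further below:
Session session = HibernateUtil.getSessionFactory().openSession();
Transaction tx = session.beginTransaction();
try {
    // 'item' is the detached bean that was edited through the Vaadin FieldGroup
    Object managed = session.merge(item); // merge returns the managed copy; 'item' stays detached
    tx.commit();
} catch (RuntimeException e) {
    tx.rollback();
    throw e;
} finally {
    session.close();
}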
Your question is quite complex and hard to answer, but I hope these links help you get started:
Pojo binding strategy for hibernate
https://vaadin.com/forum#!/thread/39712
MVP-lite
https://vaadin.com/directory#addon/mvp-lite (stick with event driven pattern)
I have figured out how to make changes to the database; here is some code to demonstrate:
try {
    /** define the session and begin it **/
    hbsession = HibernateUtil.getSessionFactory().getCurrentSession();
    hbsession.beginTransaction();

    /** table is the name of the Bean class linked to the corresponding SQL table **/
    /** bind the value as a named parameter instead of concatenating it, so the HQL stays valid and safe **/
    String query = "UPDATE table SET name = :name";

    /** Run the string as an HQL query **/
    Query q = hbsession.createQuery(query);
    q.setParameter("name", textfield.getValue());
    q.executeUpdate(); /** This command saves changes or deletes an entry in the table **/

    hbsession.getTransaction().commit();
} catch (RuntimeException rex) {
    hbsession.getTransaction().rollback();
    throw rex;
}