I am trying to build a booking portal. A booking has a check-in datetime and a check-out datetime. The Spring Boot application will run in many replicas.
My problem is ensuring that overlapping bookings are impossible.
First of all, here is my repository method to check whether a time slot is blocked:
@Lock(LockModeType.PESSIMISTIC_READ)
@QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "1000")})
Optional<BookingEntity> findFirstByRoomIdAndCheckInBeforeAndCheckOutAfter(Long roomId, LocalDateTime checkIn, LocalDateTime checkOut);
As you can see, I am using findFirstBy rather than findBy because the query could return more than one result.
With this I can check whether a time slot is blocked. Note that in the call I swap the requested check-in and check-out: two intervals overlap exactly when each one starts before the other ends, so I look for an existing booking whose check-in is before the requested check-out and whose check-out is after the requested check-in:
findFirstByRoomIdAndCheckInBeforeAndCheckOutAfter(roomId, requestCheckOut, requestCheckIn).isPresent();
And everything happens in the controller:
@PostMapping("")
@Transactional
public String myController(LocalDateTime checkIn, LocalDateTime checkOut, long roomId) {
    try {
        if (myService.bookingBlocked(checkIn, checkOut, roomId)) {
            log.warning("Booking blocked");
            return "no!";
        }
        bookingService.createBooking(checkIn, checkOut, roomId);
        return "good job";
    } catch (Exception exception) {
        return "something went wrong";
    }
}
Everything works fine, but I can simulate an overlapping booking if I set a breakpoint after the bookingBlocked check in two replicas. The following happens:
Replica 1 checks whether the slot is free (it is)
Replica 2 checks whether the slot is free (it is)
Replica 1 creates a new entity
Replica 2 creates a new entity
Now I have an overlapping booking.
My idea was to create a @Constraint, but this is not possible with Hibernate's MySQL dialect. Then I tried to create a @Check, but this also seems to be impossible.
My next idea was to create a lock with a transaction (already in the code at the top). This seems to work, but I am not sure whether I have implemented it correctly. If I try the same breakpoint experiment as before, the following happens:
Replica 1 checks whether the slot is free (it is)
Replica 2 checks whether the slot is free (it is)
Replica 1 creates a new entity
Replica 2 creates a new entity (throws an exception and is silently rolled back)
The controller returns a 500.
I am not sure where the exception is thrown. I am not sure what happens when a replica is shut down while holding the lock. I am not sure whether I can create a deadlock in the database. And it is still possible to create an overlap by manipulating the database with a plain SQL query.
Could anybody tell me whether this is the correct approach? Is there a way to create a database constraint through Hibernate (I don't use migration scripts, and adding them for a single constraint would not be great)? Should I use optimistic locking? Every idea is welcome.
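For reference, the check-then-insert inside one transaction that I mean looks roughly like the sketch below. The service and repository wiring is simplified, the repository method is the one shown above (arguably with PESSIMISTIC_WRITE instead of PESSIMISTIC_READ), and I am aware that row locks may not block inserts of rows that never matched any existing row:
import java.time.LocalDateTime;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class BookingService {

    private final BookingRepository bookingRepository;

    public BookingService(BookingRepository bookingRepository) {
        this.bookingRepository = bookingRepository;
    }

    @Transactional
    public boolean tryBook(long roomId, LocalDateTime checkIn, LocalDateTime checkOut) {
        // Check and insert run in the same transaction, so the row locks
        // taken by the locking query are held until commit.
        boolean blocked = bookingRepository
                .findFirstByRoomIdAndCheckInBeforeAndCheckOutAfter(roomId, checkOut, checkIn)
                .isPresent();
        if (blocked) {
            return false;
        }
        bookingRepository.save(new BookingEntity(roomId, checkIn, checkOut));
        return true;
    }
}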
I'm trying to roll back the changes to a Postgres table in between component tests so each one has a clean DB to work with.
I'm using Liquibase to set up Postgres (the changelog XML to describe the setup, and then the liquibase-core Kotlin/Java library to apply it). I'm also using Hibernate to interact with Postgres directly. The test framework I'm using is Kotest, using the beforeXXX methods to make sure all the setup happens before the tests run. The database is set up once before everything runs, and the idea is to roll back after each test.
From looking in the docs, tagDatabase and rollback seem to be what I need; however, when running them they don't seem to actually roll anything back.
The code is roughly as follows (this is just test code to see if it works at all, mind - code would ideally be segmented as I described above):
// 1 - (Pre-all-tests) Postgres setup
liquibase = Liquibase(
    "/db/changelog/changelog-master.xml",
    ClassLoaderResourceAccessor(),
    DatabaseFactory.getInstance().findCorrectDatabaseImplementation(JdbcConnection(connection))
)
liquibase.update(Contexts(), LabelExpression())
liquibase.tag("initialised")

// 2 - Something is inserted
val newEntity = ThingEntity()
entityManager.persist(newEntity)
entityManager.transaction.commit()
entityManager.clear()

// 3 - Cleanup
liquibase.rollback("initialised", Contexts())

// 4 - Fetching
entityManager.find(ThingEntity::class.java, id)
Thing is, after running liquibase.rollback, the newEntity I persisted earlier is still present. The tag has disappeared - if I run the doesTagExist method it returns true before and false after the rollback, so the tag is at least being removed.
Given that I'm clearing the entity manager after the commit, I don't think it's a caching issue, and as I said the tag is being removed - just not the data.
Can anyone tell me why the actual transactions (i.e. the persist) aren't being erased?
Thanks!
It looks like you are using Liquibase in the wrong way. What you are trying to do (roll back data that was added in a unit test) is close to what is described here: Rollback transaction after @Test
When you ask Liquibase to roll back to some tag, it just executes the rollback scripts (if any are provided) for the changesets that were applied after the changeset with that tag: https://docs.liquibase.com/commands/community/rollbackbytag.html
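In other words, the rollback only undoes what the changesets themselves describe; rows your test inserts through Hibernate are not part of any changeset, so Liquibase knows nothing about them. For illustration only (table and values made up), a changeset carries its own rollback instructions like this:
<changeSet id="2" author="example">
    <insert tableName="thing">
        <column name="name" value="seed-row"/>
    </insert>
    <!-- executed only when rolling back past this changeset -->
    <rollback>
        <delete tableName="thing">
            <where>name = 'seed-row'</where>
        </delete>
    </rollback>
</changeSet>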
I'm working on a personal project, and there I observe strange behaviour in Spring's CrudRepository.save. There is a unique constraint on one of the fields. When I save a new record with a duplicate value in this field, I don't get any exception until the request handler method completes. Is this normal behaviour?
DB is Postgres
RuleSet save = ruleSetRepository.save(convertToRuleSet(request));
fileRepository.createRuleSet(request);
try {
    gitRepository.commitAddPush(request.getRuleSetName(), "Added rule set " + request.getRuleSetName(), gitVersion);
} catch (GenericGitException gitException) {
    fileRepository.deleteClassDirectory(request.getRuleSetName());
    fileRepository.deleteRuleSet(request.getRuleSetName());
    throw new CommonRuleCreateException(gitException.getMessage());
}
return new RuleSetResponse(save.getId(), save.getName(), save.getDescription(), save.getPackageName());
This entire method gets called without any exception.
What you might be missing is that the save method only writes to the DB when the transaction completes - generally at the end of the method execution. If you want the write to hit the DB right away, use saveAndFlush.
Also, if you want, you can make sure your repository methods use a new transaction instead of sharing their caller's. That way, when the repository method call completes, its transaction data is saved to the DB.
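A sketch of the saveAndFlush variant, reusing the names from the question (the exception handling is illustrative, not the asker's actual code):
import org.springframework.dao.DataIntegrityViolationException;

RuleSet saved;
try {
    // saveAndFlush pushes the INSERT to the database immediately, so a
    // unique-constraint violation surfaces here instead of at commit time.
    saved = ruleSetRepository.saveAndFlush(convertToRuleSet(request));
} catch (DataIntegrityViolationException e) {
    throw new CommonRuleCreateException("Duplicate rule set: " + request.getRuleSetName());
}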
I have a Hibernate query (Hibernate 3) that only reads data from the database. The database is updated by a separate application, and the query result does not reflect the changes in the database.
With a bit of research, I think it may have something to do with the Hibernate L2 cache (I don't think it's the L1 cache, since I always open a new session and close it when it's done).
Session session = sessionFactoryWrapper.getSession();
List<FlowCount> result = session.createSQLQuery(flowCountSQL).list();
session.close();
I tried disabling the second-level cache in the Hibernate config file, but it's not working:
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.cache.use_query_cache">false</property>
<property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property>
I also added session.setCacheMode(CacheMode.REFRESH); after Session session = sessionFactoryWrapper.getSession(); to force a refresh of the L1 cache, but it's still not working...
Is there another way to pick up the changes in the database? Am I doing something wrong in how I disable the cache? Thanks.
Update:
I did another experiment by monitoring the database query log:
Run the code the 1st time. Check the log. The query shows up.
Wait a few minutes. The data is changed by another application. I verified this through MySQL Workbench. To distinguish it from the previous query, I add a dummy condition.
Run the code the 2nd time. Check the log; the query shows up.
Both times I'm using the same query, but since the data has changed, the results should be different - yet somehow they're not...
In order to force an L1 cache refresh you can use the refresh(Object) method of Session.
From the Hibernate docs:
Re-read the state of the given instance from the underlying database. It is inadvisable to use this to implement long-running sessions that span many business tasks. This method is, however, useful in certain special circumstances. For example:
where a database trigger alters the object state upon insert or update
after executing direct SQL (eg. a mass update) in the same session
after inserting a Blob or Clob
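For instance, with the entity from the question (the flowId variable is assumed):
Session session = sessionFactoryWrapper.getSession();
FlowCount flow = (FlowCount) session.get(FlowCount.class, flowId);
// ... the row is modified by the other application in the meantime ...
session.refresh(flow); // re-reads the current state of this row from the database
session.close();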
Moreover, you mentioned that you added session.setCacheMode(CacheMode.REFRESH) to force a refresh of the L1 cache. This won't work, because CacheMode has nothing to do with the L1 cache. From the Hibernate docs again:
CacheMode controls how the session interacts with the second-level cache and query cache.
Without a second-level cache and query cache, Hibernate will always fetch all data from the database in a new session.
You can check which query is actually executed by Hibernate by enabling the DEBUG log level for the org.hibernate package (and TRACE for org.hibernate.type if you want to see the bound variables).
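With log4j as the logging backend (an assumption), the configuration would be:
log4j.logger.org.hibernate=DEBUG
log4j.logger.org.hibernate.type=TRACE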
How old a change is the query reflecting? If it is showing the changes after some time, it might have to do with how you obtain your session.
I am not familiar with the SessionFactoryWrapper class - is this a custom class that you wrote? Are you somehow caching the session object for longer than necessary? If so, the query will reuse the objects that have already been loaded into the session. This is the idea behind the repeatable-read semantics that Hibernate guarantees.
You can clear the session before running your query, and it will then return the latest data.
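Something along these lines, reusing the snippet from the question:
Session session = sessionFactoryWrapper.getSession();
session.clear(); // in case the wrapper hands back a long-lived session, drop its first-level cache
List<FlowCount> result = session.createSQLQuery(flowCountSQL).list();
session.close();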
Hibernate's built-in connection pooling mechanism is buggy.
Replace it with a production-quality alternative like c3p0.
I had the exact same issue, where stale data was returned, until I started using c3p0.
Just in case it IS the 1st-level cache:
Can you show the query you make?
See the following bugs:
https://hibernate.atlassian.net/browse/HHH-9367
https://jira.grails.org/browse/GRAILS-11645
Additional:
http://howtodoinjava.com/2013/07/01/understanding-hibernate-first-level-cache-with-example/
http://www.dineshonjava.com/p/cacheing-in-hibernate-first-level-and.html#.VhZ7o3VElhE
Repeatable finder problem caused by Hibernate's 1st-level cache
To be clear, both tests succeed, which is not logical at all:
userByEmail('foo@bar.com').email != 'foo@bar.com'
Complete Test
@Issue('https://jira.grails.org/browse/GRAILS-11645')
class FirstLevelCacheSpec extends IntegrationSpec {

    def sessionFactory

    def setup() {
        User.withNewSession {
            User user = new User(email: 'test@test.org', password: 'test-password')
            user.save(flush: true, failOnError: true)
        }
    }

    private void updateObjectInNewSession() {
        User.withNewSession {
            def u = User.findByEmail('test@test.org', [cache: false])
            u.email = 'foo@bar.com'
            u.save(flush: true, failOnError: true)
        }
    }

    private User userByEmail(String email) {
        User.findByEmail(email, [cache: false])
    }

    def "test first update"() {
        when: 'changing the object in another session'
        updateObjectInNewSession()

        then: 'retrieving the object by changed identifier (no 2nd level cache)'
        userByEmail('foo@bar.com').email == 'foo@bar.com'
    }

    def "test stale object in 1st level"() {
        when: 'changing the object after pulling objects to cache by finder'
        userByEmail('test@test.org')
        updateObjectInNewSession()

        then: 'retrieving the object by changed identifier (no 2nd level cache)'
        userByEmail('foo@bar.com').email != 'foo@bar.com'
    }
}
I use MyBatis 3.1.
I have two use cases where I need to bypass the MyBatis local cache and hit the DB directly.
Since the MyBatis configuration file only has global settings, it is not applicable to my case: I need this as an exception, not as the default. The attributes of the MyBatis <select> XML statement do not seem to include this option either.
Use case 1: 'select sysdate from dual'.
MyBatis caching causes this one to always return the same value within a MyBatis session. This causes an issue in my integration test when I try to replicate a situation with an outdated entry.
My workaround was just to use a plain JDBC call.
Use case 2: a 'select' from one thread does not always see the value written by another thread.
Thread 1:
SomeObject stored = dao.insertSomeObject(obj);
runInAnotherThread(stored.getId());
// complete and commit
Thread 2:
// 'id' received as an argument provided to 'runInAnotherThread(...)'
SomeObject stored = dao.findById(id);
int count = 0;
while (stored == null && count < 300) {
    ++count;
    Thread.sleep(1000);
    stored = dao.findById(id);
}
if (stored == null) {
    throw new MyException("There is no SomeObject with id=" + id);
}
I occasionally receive MyException errors on a server, but I can't reproduce them on my local machine. In every case the object is actually in the DB. So I guess the error depends on whether the stored object was in the MyBatis local cache the first time, and waiting for 5 minutes does not help, since the loop never checks the actual DB.
So my question is: how do I solve the above use cases within MyBatis without falling back to plain JDBC?
Being able to somehow signal MyBatis not to use a cached value in a specific call (the best) or in all calls to a specific query would be the preferred option, but I will consider any workaround as well.
I don't know a way to bypass the local cache, but there are two options for achieving what you need.
The first option is to set flushCache="true" on the select. This clears the cache after the statement executes, so the next query will hit the database.
<select id="getCurrentDate" resultType="date" flushCache="true">
    SELECT SYSDATE FROM DUAL
</select>
Another option is to use a STATEMENT-level local cache. By default, the local cache is kept for the duration of the SESSION (which typically translates to a transaction). This is specified by the localCacheScope option and is set per session factory, so it affects all queries that use this MyBatis session factory.
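For reference, the setting lives in the <settings> section of the MyBatis configuration file and would look like this:
<settings>
    <setting name="localCacheScope" value="STATEMENT"/>
</settings>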
Let me summarize.
The solution from the previous answer, the flushCache="true" option on the query, works and solves both use cases. It flushes the cache after every such select, so the next select statement will hit the DB. Although it takes effect after the select statement has executed, that's OK, since the cache is empty anyway before the first select.
Another solution is to start a new session. I use Spring, so it's enough to mark a method with @Transactional(propagation = Propagation.REQUIRES_NEW). Since the MyBatis session is tied to the Spring transaction, this causes another MyBatis session with a fresh cache to be created every time the method is called.
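A sketch of that (the service and DAO type names are invented for illustration):
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SomeObjectLookup {

    private final SomeObjectDao dao;

    public SomeObjectLookup(SomeObjectDao dao) {
        this.dao = dao;
    }

    // REQUIRES_NEW starts a fresh Spring transaction, and with it a fresh
    // MyBatis session, so findById cannot be answered from a stale local cache.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public SomeObject findByIdFresh(long id) {
        return dao.findById(id);
    }
}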
For some reason, the MyBatis option useCache="false" on the query does not work.
The following @Options annotation can be used:
@Options(useCache=false, flushCache=FlushCachePolicy.TRUE)
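In context, on an annotated mapper method, that looks roughly like this (the mapper interface itself is illustrative):
import java.util.Date;
import org.apache.ibatis.annotations.Options;
import org.apache.ibatis.annotations.Select;

public interface DateMapper {

    // flushCache=TRUE flushes this statement's caches so reads go to the
    // database; useCache=false keeps the result out of the second-level cache
    @Select("SELECT SYSDATE FROM DUAL")
    @Options(useCache = false, flushCache = Options.FlushCachePolicy.TRUE)
    Date getCurrentDate();
}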
Apart from the answers by Roman and Alexander, there is one more solution for this:
Configuration configuration = MyBatisUtil.getSqlSessionFactory().getConfiguration();
Collection<Cache> caches = configuration.getCaches();
// If you have multiple caches and want a particular one cleared:
// Cache cache = configuration.getCache("PPL"); // namespace of the particular XML mapper
for (Cache cache : caches) {
    Lock w = cache.getReadWriteLock().writeLock();
    w.lock();
    try {
        cache.clear();
    } finally {
        w.unlock();
    }
}
I'm writing a project for college and I've encountered a strange phenomenon.
The program is supposed to serve a restaurant, so it has a server side that manages all the needs of the different front ends. The front ends are a "dinner terminal", a "kitchen terminal", a "waiter terminal" and an "admin terminal".
When I add an object to the DB, I see it in the DB, the kitchen terminal receives it, and I can see that the object it gets is the right one.
public void addSessionOrder(String id, SessionOrder sessionOrder)
{
    Session context = clientSessions.get(id);
    context.beginTransaction();
    context.save(sessionOrder);
    context.getTransaction()
           .commit();
}
Notice that each terminal (connection) has its own Hibernate session.
However, once I try to update the status of a SessionOrder, I get this exception:
java.lang.NullPointerException
at database.DatabaseContext.updateSessionOrderStatus(DatabaseContext.java:170)
at protocol.handlers.UpdateSessionOrderStatusHandler.handle(UpdateSessionOrderStatusHandler.java:35)
at server.ResturantServer.handleClientMessage(ResturantServer.java:126)
at server.ConnectionManager.handleClientMessage(ConnectionManager.java:86)
at server.SockJSSocketHandler$3.handle(SockJSSocketHandler.java:55)
This is the method:
public void updateSessionOrderStatus(String id, SessionOrder order, OrderStatus newStatus)
{
    Session context = clientSessions.get(id);
    context.beginTransaction();
    SessionOrder ord = (SessionOrder) context.get(SessionOrder.class, order.getOrderId());
    ord.setStatus(newStatus);
    context.update(ord);
    context.getTransaction()
           .commit();
}
The line that throws the exception is ord.setStatus(newStatus);.
After debugging, this is the information I have:
The fields id and sessionOrder contain legitimate data and are initialized as needed.
sessionOrder.getOrderId() returns the ID of a corresponding object in the DB (it exists).
The query on the DB returns null for ord.
Another thing I've noticed: if I turn the server off (killing Hibernate) and restart everything, the whole thing works fine. So I believe it has something to do with the fact that some session X inserted the object into the DB, and some other session Y tries to retrieve it afterwards.
The sessions are different and originate from different connections, and the specific order exists in the DB, so I see no reason for it not to return the value.
I think it has something to do with caching of the model, but I'm quite a noob at Hibernate, so I can't pinpoint the problem.
sessionOrder.getOrderId() returns the needed ID for a corresponding object in the DB (it exists); the query on the DB returns null for ord.
The ord object is null, so it throws an NPE. I guess Hibernate cannot find the order with the given ID.
Add some logging there and you'll see what is causing the trouble.
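For example, a minimal guard inside updateSessionOrderStatus (assuming an SLF4J-style logger named log):
SessionOrder ord = (SessionOrder) context.get(SessionOrder.class, order.getOrderId());
if (ord == null) {
    // the lookup came back empty - log it instead of hitting the NPE below
    log.warn("No SessionOrder found for id={}", order.getOrderId());
    context.getTransaction().rollback();
    return;
}
ord.setStatus(newStatus);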