I'm working on a personal project, and I've observed strange behaviour in Spring's CrudRepository.save. There is a unique constraint on one of the fields. When I save a new record with a duplicate value for this field, I don't get any exception until the request handler method completes. Is this normal behaviour?
DB is Postgres
RuleSet save = ruleSetRepository.save(convertToRuleSet(request));
fileRepository.createRuleSet(request);
try {
    gitRepository.commitAddPush(request.getRuleSetName(), "Added rule set " + request.getRuleSetName(), gitVersion);
} catch (GenericGitException gitException) {
    fileRepository.deleteClassDirectory(request.getRuleSetName());
    fileRepository.deleteRuleSet(request.getRuleSetName());
    throw new CommonRuleCreateException(gitException.getMessage());
}
return new RuleSetResponse(save.getId(), save.getName(), save.getDescription(), save.getPackageName());
This entire method runs to completion without any exception.
What you might be missing is that save does not write to the DB immediately; the pending changes are flushed when the transaction commits, generally at the end of the method execution. If you want the insert to hit the DB right at that point, use saveAndFlush.
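For example, something along these lines (just a sketch; it assumes the repository extends JpaRepository, where saveAndFlush is declared, and reuses the names from the question):

import org.springframework.dao.DataIntegrityViolationException;

// saveAndFlush sends the INSERT to the database immediately, so a
// unique-constraint violation surfaces here instead of at commit time.
try {
    RuleSet saved = ruleSetRepository.saveAndFlush(convertToRuleSet(request));
} catch (DataIntegrityViolationException e) {
    // the duplicate value is detected right here, inside the handler method
    throw new CommonRuleCreateException(e.getMessage());
}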
Also, if you want, you can make sure your repository methods use a new transaction rather than joining the caller's transaction, so that when the repository method call completes, its data is committed to the DB.
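A sketch of that approach (the service class and method name here are made up):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class RuleSetPersistenceService {

    private final RuleSetRepository ruleSetRepository;

    public RuleSetPersistenceService(RuleSetRepository ruleSetRepository) {
        this.ruleSetRepository = ruleSetRepository;
    }

    // REQUIRES_NEW suspends the caller's transaction and commits this one
    // when the method returns, so a constraint violation surfaces here.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public RuleSet saveInNewTransaction(RuleSet ruleSet) {
        return ruleSetRepository.save(ruleSet);
    }
}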
I'm trying to build a booking portal. A booking has a check-in datetime and a check-out datetime. The Spring Boot application will run in many replicas.
My problem is to ensure that no overlapping bookings are possible.
First of all, here is my repository method to check whether the time is blocked:
@Lock(LockModeType.PESSIMISTIC_READ)
@QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "1000")})
Optional<BookingEntity> findFirstByRoomIdAndCheckInBeforeAndCheckOutAfter(Long roomId, LocalDateTime checkIn, LocalDateTime checkOut);
As you can see, I am using findFirstBy and not findBy because the query could return more than one result.
With this I can check whether the time is blocked (please notice that in the call I swap the requested check-in and the requested check-out):
findFirstByRoomIdAndCheckInBeforeAndCheckOutAfter(roomId, requestCheckOut, requestCheckIn).isPresent();
And everything happens in the controller:
@PostMapping("")
@Transactional
public String myController(LocalDateTime checkIn, LocalDateTime checkOut, long roomId) {
    try {
        if (myService.bookingBlocked(checkIn, checkOut, roomId)) {
            Log.warning("Booking blocked");
            return "no!";
        }
        bookingService.createBooking(checkIn, checkOut, roomId);
        return "good job";
    } catch (Exception exception) {
        return "something went wrong";
    }
}
Everything works fine, but I can simulate an overlapping booking if I set a breakpoint after the bookingBlocked check in two replicas. The following happens:
Replica 1 checks if the booking is free (it's OK)
Replica 2 checks if the booking is free (it's OK)
Replica 1 creates a new entity
Replica 2 creates a new entity
Now I have an overlapping booking.
My idea was to create a @Constraint, but this is not possible with Hibernate's MySQL dialect. Then I tried to create a @Check, but this also seems to be impossible.
Now my idea was to create a lock with a transaction (it is already in the code at the top). This seems to work, but I am not sure whether I have implemented it correctly. If I try the same as before with the breakpoints, the following happens:
Replica 1 checks if the booking is free (it's OK)
Replica 2 checks if the booking is free (it's OK)
Replica 1 creates a new entity
Replica 2 creates a new entity (throws an exception and is silently rolled back)
The controller returns a 500.
I am not sure where the exception is thrown. I am not sure what happens when a replica is shut down during the lock. I am not sure whether I can create a deadlock in the database. And it is still possible to manipulate the database via an SQL query and create an overlapping booking.
Could anybody tell me whether this is the correct way? Is there a possibility to create a constraint in the database with Hibernate (I don't use migration scripts, and adding them for just one constraint would not be great)? Should I use optimistic locking? Every idea is welcome.
I have a somewhat strange situation which I need to deal with, but can't seem to find a solution.
I need to solve a potential race condition on customer insertion. We receive the customers through a topic, so they come with an id (we keep it because it's the same id we have in a different database for a different microservice). So if, by some chance, the same customer is committed to the database before our flush operation runs, we should update the record in the database with the one that arrived through the topic, provided the last-activity field of the incoming customer is after the last-activity field of the DB entry.
The problem we encounter is that, while the flush operation recognizes the newly committed customer and throws a ConstraintViolationException, when it gets to the find line it returns the customer we tried to persist above, not the customer in the database.
The code breaks down like this.
try {
    entityManager.persist(customer);
    // at this point, I insert a new customer in the database with the same id as the one I've persisted
    entityManager.flush();
} catch (PersistenceException e) {
    if (e.getCause() instanceof ConstraintViolationException) {
        dbCustomer = Optional.of(entityManager.find(Customer.class, customer.getId()));
        // update the DB customer with data from the persisted customer if the last
        // update date on the persisted customer is after the one on the DB customer
    }
}
I tried different transaction propagation options, with no success. I also tried to use the detach(customer) method before trying to find the DB customer; however, in that case the find call returns null.
Thanks
As soon as a flush fails, the persistence context is essentially broken. If you need to do something with the result of this code block that requires flushing, you have to do it in a new transaction when a constraint violation occurs.
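A minimal sketch of that, assuming Spring transaction management (the service name and the getLastActivity accessor are hypothetical):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class CustomerUpsertService {

    @PersistenceContext
    private EntityManager entityManager;

    // Runs with a fresh transaction and a fresh persistence context, so the
    // find below is not poisoned by the failed flush in the caller.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void updateIfNewer(Customer incoming) {
        Customer dbCustomer = entityManager.find(Customer.class, incoming.getId());
        if (dbCustomer != null
                && incoming.getLastActivity().isAfter(dbCustomer.getLastActivity())) {
            // copy over whatever fields matter; dbCustomer is managed, so the
            // change is flushed when this new transaction commits
            dbCustomer.setLastActivity(incoming.getLastActivity());
        }
    }
}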
I've been struggling for a few hours with this one and could do with some help.
A client sends an object that contains a list;
One of the objects in the list has been modified on the client;
In some cases I don't want that modified entity to be persisted to the database; I want to keep the original database values.
I have tried the following and various attempts to clear(), refresh() and flush() the session:
List<Integer> notToModifyIds = dao.getDoNotModifyIds(parentEntity.getId());
MyEntity entityFromClient, entityFromDb;
for (Integer notToModifyId : notToModifyIds) {
    ListIterator iterator = parentEntity.getEntities().listIterator();
    while (iterator.hasNext()) {
        entityFromClient = (MyEntity) iterator.next();
        if (Objects.equals(entityFromClient.getId(), notToModifyId)) {
            dao.evict(entityFromClient);
            entityFromDb = (MyEntity) dao.get(MyEntity.class, notToModifyId);
            iterator.remove();
            iterator.add(entityFromDb);
        }
    }
}
However, no matter what I try I always get the values from the client persisted to the database. When I add a breakpoint after iterator.add() I can check that the database value has not been updated at that point, hence I know that if I could load the entity from the DB then I would have the value I want.
I'm feeling a little stupid!
I don't know if I got the whole scenario here. Are those modified "entitiesFromClient" attached to the Hibernate session? If they are, the changes were probably automatically flushed to the database before you "evicted" them.
Setting a MANUAL flush mode would help you avoid the automatic behaviour.
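For example (Hibernate 5.2+ has setHibernateFlushMode; older versions use setFlushMode instead; the sessionFactory here is assumed to be available):

import org.hibernate.FlushMode;
import org.hibernate.Session;

// With MANUAL flush mode Hibernate only writes to the database when you
// call flush() explicitly, so attached entities are not flushed behind
// your back before you evict them.
Session session = sessionFactory.getCurrentSession();
session.setHibernateFlushMode(FlushMode.MANUAL);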
First of all, I would enable the Hibernate SQL logging to see more precisely what is happening. See Enable Hibernate logging.
Checking the database in another session (while stopped at the breakpoint) will not help if this code is running within a transaction. Even if the change was already flushed to the database, you wouldn't see it until the transaction is committed.
I am using Spring, Hibernate and PostgreSQL.
Let's say I have a table looking like this:
CREATE TABLE test
(
    id integer NOT NULL,
    name character(10),
    CONSTRAINT test_unique UNIQUE (id)
)
So whenever I insert a record, the id attribute must be unique.
I would like to know which is the better way to insert a new record (in my Spring Java app):
1) Check whether a record with the given id exists, and insert it only if it doesn't, something like this:
if (testDao.find(id) == null) {
    Test test = new Test(id, name);
    testDao.create(test);
}
2) Call the create method straight away and see whether it throws a DataAccessException...
Test test = new Test(id, name);
try {
    testDao.create(test);
} catch (DataAccessException e) {
    System.out.println("Error inserting record");
}
I consider the 1st way more appropriate, but it means more processing for the DB. What is your opinion?
Thank you in advance for any advice.
Option (1) is subject to a race condition, where a concurrent session could create the record between you checking for it and inserting it. This window is longer than you might expect, because the record might already have been inserted by another transaction that has not yet committed.
Option (2) is better, but will result in a lot of noise in the PostgreSQL error logs.
The best way is to use PostgreSQL 9.5's INSERT ... ON CONFLICT ... support to do a reliable, race-condition-free insert-if-not-exists operation.
On older versions you can use a loop in plpgsql.
Both those options require use of native queries, of course.
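A rough sketch of the 9.5+ approach as a JPA native query (reusing the test table from the question; the EntityManager is assumed to be injected):

// ON CONFLICT DO NOTHING turns the insert into a no-op when the id is
// already taken: no exception, no race condition.
int inserted = entityManager.createNativeQuery(
        "INSERT INTO test (id, name) VALUES (?, ?) ON CONFLICT (id) DO NOTHING")
    .setParameter(1, id)
    .setParameter(2, name)
    .executeUpdate();
// inserted == 0 means the row already existed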
It depends on the source of your ID. If you generate it yourself, you can assert uniqueness and rely on catching an exception, e.g. by using a UUID: http://docs.oracle.com/javase/1.5.0/docs/api/java/util/UUID.html
Another way would be to let Postgres generate the ID using the SERIAL data type:
http://www.postgresql.org/docs/8.1/interactive/datatype.html#DATATYPE-SERIAL
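In JPA/Hibernate that usually maps to an identity-generated id; a sketch (the entity below is illustrative):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Test {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // backed by a SERIAL column in Postgres
    private Integer id;

    private String name;
}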
If you have to take the ID over from an untrusted source, do the prior check.
I use MyBatis 3.1.
I have two use cases where I need to bypass the MyBatis local cache and hit the DB directly.
Since the MyBatis configuration file only has global settings, it is not applicable to my case, because I need this as an exception, not as a default. The attributes of the MyBatis <select> XML statement do not seem to include this option either.
Use case 1: 'select sysdate from dual'.
MyBatis caching causes this one to always return the same value within a MyBatis session. This causes an issue in my integration test when I try to replicate a situation with an outdated entry.
My workaround was just to use a plain JDBC call.
Use case 2: 'select' from one thread does not always see the value written by another thread.
Thread 1:
SomeObject stored = dao.insertSomeObject(obj);
runInAnotherThread(stored.getId());
//complete and commit
Thread 2:
//'id' received as an argument provided to 'runInAnotherThread(...)'
SomeObject stored = dao.findById(id);
int count = 0;
while (stored == null && count < 300) {
    ++count;
    Thread.sleep(1000);
    stored = dao.findById(id);
}
if (stored == null) {
    throw new MyException("There is no SomeObject with id=" + id);
}
I occasionally receive MyException errors on a server, but can't reproduce them on my local machine. In all cases the object is actually in the DB. So I guess the error depends on whether the stored object was already in the MyBatis local cache the first time, in which case waiting even 5 minutes does not help, since the code never checks the actual DB.
So my question is: how can I solve the above use cases within MyBatis, without falling back to plain JDBC?
Being able to somehow signal MyBatis not to use a cached value for a specific call (the best) or for all calls to a specific query would be the preferred option, but I will consider any workaround as well.
I don't know a way to bypass the local cache, but there are two options for achieving what you need.
The first option is to set flushCache="true" on the select. This will clear the cache after statement execution, so the next query will hit the database.
<select id="getCurrentDate" resultType="date" flushCache="true">
SELECT SYSDATE FROM DUAL
</select>
Another option is to use STATEMENT-level local cache. By default the local cache is used for the duration of a SESSION (which typically translates to a transaction). This is specified by the localCacheScope option and is set per session factory, so it affects all queries that use this MyBatis session factory.
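If you build the configuration programmatically, a sketch of that setting (the sqlSessionFactory variable is assumed; the same option can also be set in the XML <settings> block):

import org.apache.ibatis.session.Configuration;
import org.apache.ibatis.session.LocalCacheScope;

// With STATEMENT scope the local cache only lives for the duration of one
// statement, so consecutive selects in the same session hit the database.
Configuration configuration = sqlSessionFactory.getConfiguration();
configuration.setLocalCacheScope(LocalCacheScope.STATEMENT);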
Let me summarize.
The solution from the previous answer, the flushCache="true" option on the query, works and solves both use cases. It flushes the cache after every such select, so the next select statement will hit the DB. Although the flush only happens after the select statement executes, that's OK, since the cache is empty anyway before the first select.
Another solution is to start a new session. I use Spring, so it's enough to mark a method with @Transactional(propagation = Propagation.REQUIRES_NEW). Since the MyBatis session is tied to the Spring transaction, this causes another MyBatis session with a fresh cache to be created every time the method is called.
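A sketch of that (the service and DAO names are made up):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class FreshReadService {

    private final SomeObjectDao dao;

    public FreshReadService(SomeObjectDao dao) {
        this.dao = dao;
    }

    // A new transaction means a new MyBatis session, hence an empty local cache.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public SomeObject findByIdFresh(long id) {
        return dao.findById(id);
    }
}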
For some reason, the MyBatis option useCache="false" on the query does not work.
The following Options annotation can be used:
@Options(useCache = false, flushCache = FlushCachePolicy.TRUE)
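For instance, a sketch of a mapper method using it (the interface and method names are invented):

import java.util.Date;
import org.apache.ibatis.annotations.Options;
import org.apache.ibatis.annotations.Options.FlushCachePolicy;
import org.apache.ibatis.annotations.Select;

public interface TimeMapper {

    // useCache=false skips the second-level cache and FlushCachePolicy.TRUE
    // clears the local cache, so every call hits the database.
    @Select("SELECT SYSDATE FROM DUAL")
    @Options(useCache = false, flushCache = FlushCachePolicy.TRUE)
    Date getCurrentDate();
}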
Apart from the answers by Roman and Alexander, there is one more solution for this:
Configuration configuration = MyBatisUtil.getSqlSessionFactory().getConfiguration();
Collection<Cache> caches = configuration.getCaches();
// If you have multiple caches and want a particular one cleared:
// Cache cache = configuration.getCache("PPL"); // namespace of the particular XML
for (Cache cache : caches) {
    Lock w = cache.getReadWriteLock().writeLock();
    w.lock();
    try {
        cache.clear();
    } finally {
        w.unlock();
    }
}